00:00:00.014 Started by upstream project "autotest-per-patch" build number 127087 00:00:00.014 originally caused by: 00:00:00.015 Started by user sys_sgci 00:00:00.088 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.089 The recommended git tool is: git 00:00:00.089 using credential 00000000-0000-0000-0000-000000000002 00:00:00.091 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.133 Fetching changes from the remote Git repository 00:00:00.135 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.188 Using shallow fetch with depth 1 00:00:00.188 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.188 > git --version # timeout=10 00:00:00.228 > git --version # 'git version 2.39.2' 00:00:00.228 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.260 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.260 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.556 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.567 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.580 Checking out Revision f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 (FETCH_HEAD) 00:00:05.580 > git config core.sparsecheckout # timeout=10 00:00:05.591 > git read-tree -mu HEAD # timeout=10 00:00:05.609 > git checkout -f f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 # timeout=5 00:00:05.667 Commit message: "spdk-abi-per-patch: fix check-so-deps-docker-autotest parameters" 00:00:05.667 > git rev-list --no-walk f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 # timeout=10 00:00:05.775 [Pipeline] Start of Pipeline 00:00:05.791 [Pipeline] library 00:00:05.793 Loading library shm_lib@master 00:00:05.793 Library shm_lib@master is cached. Copying from home. 00:00:05.807 [Pipeline] node 00:00:20.809 Still waiting to schedule task 00:00:20.810 Waiting for next available executor on ‘vagrant-vm-host’ 00:05:18.133 Running on VM-host-SM16 in /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:05:18.135 [Pipeline] { 00:05:18.146 [Pipeline] catchError 00:05:18.148 [Pipeline] { 00:05:18.163 [Pipeline] wrap 00:05:18.174 [Pipeline] { 00:05:18.182 [Pipeline] stage 00:05:18.184 [Pipeline] { (Prologue) 00:05:18.206 [Pipeline] echo 00:05:18.207 Node: VM-host-SM16 00:05:18.214 [Pipeline] cleanWs 00:05:18.223 [WS-CLEANUP] Deleting project workspace... 00:05:18.223 [WS-CLEANUP] Deferred wipeout is used... 
00:05:18.228 [WS-CLEANUP] done 00:05:18.407 [Pipeline] setCustomBuildProperty 00:05:18.493 [Pipeline] httpRequest 00:05:18.516 [Pipeline] echo 00:05:18.517 Sorcerer 10.211.164.101 is alive 00:05:18.525 [Pipeline] httpRequest 00:05:18.529 HttpMethod: GET 00:05:18.529 URL: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:05:18.530 Sending request to url: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:05:18.531 Response Code: HTTP/1.1 200 OK 00:05:18.531 Success: Status code 200 is in the accepted range: 200,404 00:05:18.532 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:05:18.674 [Pipeline] sh 00:05:18.951 + tar --no-same-owner -xf jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:05:18.966 [Pipeline] httpRequest 00:05:18.984 [Pipeline] echo 00:05:18.986 Sorcerer 10.211.164.101 is alive 00:05:19.000 [Pipeline] httpRequest 00:05:19.005 HttpMethod: GET 00:05:19.005 URL: http://10.211.164.101/packages/spdk_dca21ec0f3ec663fb113b85210d70609a04f98a9.tar.gz 00:05:19.006 Sending request to url: http://10.211.164.101/packages/spdk_dca21ec0f3ec663fb113b85210d70609a04f98a9.tar.gz 00:05:19.007 Response Code: HTTP/1.1 200 OK 00:05:19.007 Success: Status code 200 is in the accepted range: 200,404 00:05:19.007 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk_dca21ec0f3ec663fb113b85210d70609a04f98a9.tar.gz 00:05:21.186 [Pipeline] sh 00:05:21.462 + tar --no-same-owner -xf spdk_dca21ec0f3ec663fb113b85210d70609a04f98a9.tar.gz 00:05:24.754 [Pipeline] sh 00:05:25.030 + git -C spdk log --oneline -n5 00:05:25.030 dca21ec0f scripts/nvmf_perf: confirm set system settings 00:05:25.030 77f816207 scripts/nvmf_perf: modify set_pause_frames 00:05:25.030 81767f27c scripts/nvmf_perf: check all config file sections are present 00:05:25.030 166db62dc scripts/nvmf_perf: disable fio group reporting 00:05:25.030 dc3b3835d scripts/nvmf_perf: use dataclasses for collecting results data 00:05:25.045 [Pipeline] writeFile 00:05:25.059 [Pipeline] sh 00:05:25.337 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:05:25.347 [Pipeline] sh 00:05:25.624 + cat autorun-spdk.conf 00:05:25.624 SPDK_RUN_FUNCTIONAL_TEST=1 00:05:25.624 SPDK_TEST_NVMF=1 00:05:25.624 SPDK_TEST_NVMF_TRANSPORT=tcp 00:05:25.624 SPDK_TEST_USDT=1 00:05:25.624 SPDK_TEST_NVMF_MDNS=1 00:05:25.624 SPDK_RUN_UBSAN=1 00:05:25.624 NET_TYPE=virt 00:05:25.624 SPDK_JSONRPC_GO_CLIENT=1 00:05:25.624 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:05:25.631 RUN_NIGHTLY=0 00:05:25.632 [Pipeline] } 00:05:25.646 [Pipeline] // stage 00:05:25.660 [Pipeline] stage 00:05:25.662 [Pipeline] { (Run VM) 00:05:25.672 [Pipeline] sh 00:05:25.946 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:05:25.947 + echo 'Start stage prepare_nvme.sh' 00:05:25.947 Start stage prepare_nvme.sh 00:05:25.947 + [[ -n 5 ]] 00:05:25.947 + disk_prefix=ex5 00:05:25.947 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest ]] 00:05:25.947 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf ]] 00:05:25.947 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf 00:05:25.947 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:05:25.947 ++ SPDK_TEST_NVMF=1 00:05:25.947 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:05:25.947 ++ SPDK_TEST_USDT=1 00:05:25.947 ++ SPDK_TEST_NVMF_MDNS=1 00:05:25.947 ++ SPDK_RUN_UBSAN=1 00:05:25.947 ++ NET_TYPE=virt 00:05:25.947 ++ SPDK_JSONRPC_GO_CLIENT=1 00:05:25.947 ++ 
SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:05:25.947 ++ RUN_NIGHTLY=0 00:05:25.947 + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:05:25.947 + nvme_files=() 00:05:25.947 + declare -A nvme_files 00:05:25.947 + backend_dir=/var/lib/libvirt/images/backends 00:05:25.947 + nvme_files['nvme.img']=5G 00:05:25.947 + nvme_files['nvme-cmb.img']=5G 00:05:25.947 + nvme_files['nvme-multi0.img']=4G 00:05:25.947 + nvme_files['nvme-multi1.img']=4G 00:05:25.947 + nvme_files['nvme-multi2.img']=4G 00:05:25.947 + nvme_files['nvme-openstack.img']=8G 00:05:25.947 + nvme_files['nvme-zns.img']=5G 00:05:25.947 + (( SPDK_TEST_NVME_PMR == 1 )) 00:05:25.947 + (( SPDK_TEST_FTL == 1 )) 00:05:25.947 + (( SPDK_TEST_NVME_FDP == 1 )) 00:05:25.947 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:05:25.947 + for nvme in "${!nvme_files[@]}" 00:05:25.947 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi2.img -s 4G 00:05:25.947 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:05:25.947 + for nvme in "${!nvme_files[@]}" 00:05:25.947 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-cmb.img -s 5G 00:05:26.540 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:05:26.540 + for nvme in "${!nvme_files[@]}" 00:05:26.540 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-openstack.img -s 8G 00:05:26.540 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:05:26.540 + for nvme in "${!nvme_files[@]}" 00:05:26.540 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-zns.img -s 5G 00:05:26.798 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:05:26.798 + for nvme in "${!nvme_files[@]}" 00:05:26.798 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi1.img -s 4G 00:05:26.798 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:05:26.798 + for nvme in "${!nvme_files[@]}" 00:05:26.798 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi0.img -s 4G 00:05:26.798 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:05:26.798 + for nvme in "${!nvme_files[@]}" 00:05:26.798 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme.img -s 5G 00:05:27.365 Formatting '/var/lib/libvirt/images/backends/ex5-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:05:27.365 ++ sudo grep -rl ex5-nvme.img /etc/libvirt/qemu 00:05:27.365 + echo 'End stage prepare_nvme.sh' 00:05:27.365 End stage prepare_nvme.sh 00:05:27.375 [Pipeline] sh 00:05:27.652 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:05:27.652 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex5-nvme.img -b /var/lib/libvirt/images/backends/ex5-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img -H -a -v -f fedora38 00:05:27.652 00:05:27.652 
DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant 00:05:27.652 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk 00:05:27.652 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest 00:05:27.652 HELP=0 00:05:27.652 DRY_RUN=0 00:05:27.652 NVME_FILE=/var/lib/libvirt/images/backends/ex5-nvme.img,/var/lib/libvirt/images/backends/ex5-nvme-multi0.img, 00:05:27.652 NVME_DISKS_TYPE=nvme,nvme, 00:05:27.652 NVME_AUTO_CREATE=0 00:05:27.652 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img, 00:05:27.652 NVME_CMB=,, 00:05:27.652 NVME_PMR=,, 00:05:27.652 NVME_ZNS=,, 00:05:27.652 NVME_MS=,, 00:05:27.652 NVME_FDP=,, 00:05:27.652 SPDK_VAGRANT_DISTRO=fedora38 00:05:27.652 SPDK_VAGRANT_VMCPU=10 00:05:27.652 SPDK_VAGRANT_VMRAM=12288 00:05:27.652 SPDK_VAGRANT_PROVIDER=libvirt 00:05:27.652 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:05:27.652 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:05:27.652 SPDK_OPENSTACK_NETWORK=0 00:05:27.652 VAGRANT_PACKAGE_BOX=0 00:05:27.652 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:05:27.652 FORCE_DISTRO=true 00:05:27.652 VAGRANT_BOX_VERSION= 00:05:27.652 EXTRA_VAGRANTFILES= 00:05:27.652 NIC_MODEL=e1000 00:05:27.652 00:05:27.652 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt' 00:05:27.652 /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:05:30.936 Bringing machine 'default' up with 'libvirt' provider... 00:05:31.506 ==> default: Creating image (snapshot of base box volume). 00:05:31.764 ==> default: Creating domain with the following settings... 00:05:31.764 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721840777_fbb0a46e674e87cf73eb 00:05:31.764 ==> default: -- Domain type: kvm 00:05:31.764 ==> default: -- Cpus: 10 00:05:31.764 ==> default: -- Feature: acpi 00:05:31.764 ==> default: -- Feature: apic 00:05:31.764 ==> default: -- Feature: pae 00:05:31.764 ==> default: -- Memory: 12288M 00:05:31.764 ==> default: -- Memory Backing: hugepages: 00:05:31.764 ==> default: -- Management MAC: 00:05:31.764 ==> default: -- Loader: 00:05:31.764 ==> default: -- Nvram: 00:05:31.764 ==> default: -- Base box: spdk/fedora38 00:05:31.764 ==> default: -- Storage pool: default 00:05:31.764 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721840777_fbb0a46e674e87cf73eb.img (20G) 00:05:31.764 ==> default: -- Volume Cache: default 00:05:31.764 ==> default: -- Kernel: 00:05:31.764 ==> default: -- Initrd: 00:05:31.764 ==> default: -- Graphics Type: vnc 00:05:31.764 ==> default: -- Graphics Port: -1 00:05:31.764 ==> default: -- Graphics IP: 127.0.0.1 00:05:31.764 ==> default: -- Graphics Password: Not defined 00:05:31.764 ==> default: -- Video Type: cirrus 00:05:31.764 ==> default: -- Video VRAM: 9216 00:05:31.764 ==> default: -- Sound Type: 00:05:31.764 ==> default: -- Keymap: en-us 00:05:31.764 ==> default: -- TPM Path: 00:05:31.764 ==> default: -- INPUT: type=mouse, bus=ps2 00:05:31.764 ==> default: -- Command line args: 00:05:31.764 ==> default: -> value=-device, 00:05:31.764 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:05:31.764 ==> default: -> value=-drive, 00:05:31.764 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-0-drive0, 00:05:31.764 ==> 
default: -> value=-device, 00:05:31.764 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:05:31.764 ==> default: -> value=-device, 00:05:31.764 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:05:31.764 ==> default: -> value=-drive, 00:05:31.764 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:05:31.764 ==> default: -> value=-device, 00:05:31.764 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:05:31.764 ==> default: -> value=-drive, 00:05:31.764 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:05:31.764 ==> default: -> value=-device, 00:05:31.764 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:05:31.764 ==> default: -> value=-drive, 00:05:31.764 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:05:31.764 ==> default: -> value=-device, 00:05:31.764 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:05:32.022 ==> default: Creating shared folders metadata... 00:05:32.022 ==> default: Starting domain. 00:05:33.922 ==> default: Waiting for domain to get an IP address... 00:05:48.834 ==> default: Waiting for SSH to become available... 00:05:50.270 ==> default: Configuring and enabling network interfaces... 00:05:55.537 default: SSH address: 192.168.121.144:22 00:05:55.537 default: SSH username: vagrant 00:05:55.537 default: SSH auth method: private key 00:05:57.435 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:06:05.539 ==> default: Mounting SSHFS shared folder... 00:06:06.914 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:06:06.914 ==> default: Checking Mount.. 00:06:08.287 ==> default: Folder Successfully Mounted! 00:06:08.287 ==> default: Running provisioner: file... 00:06:08.855 default: ~/.gitconfig => .gitconfig 00:06:09.423 00:06:09.423 SUCCESS! 00:06:09.423 00:06:09.423 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:06:09.423 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:06:09.423 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 
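The backing files formatted above are handed to the guest through paired -drive/-device arguments: each raw image becomes an if=none drive, an nvme controller device gives it a PCI address and serial, and an nvme-ns device attaches the drive to that controller as a namespace with 4 KiB blocks. A rough standalone sketch of the same wiring, outside the Vagrant/libvirt wrapper (qemu-img and qemu-system-x86_64 on PATH, the 1 GiB of guest RAM, and the reuse of the ex5-nvme.img path are illustrative assumptions, not part of the job):

# Create a raw backing file the way create_nvme_img.sh reports above
# ("fmt=raw ... preallocation=falloc").
qemu-img create -f raw -o preallocation=falloc /var/lib/libvirt/images/backends/ex5-nvme.img 5G

# Expose it to a guest as one NVMe controller with a single namespace,
# mirroring the -drive/-device/-device triple passed to the domain above.
qemu-system-x86_64 -m 1024 -nographic \
  -drive format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-0-drive0 \
  -device nvme,id=nvme-0,serial=12340,addr=0x10 \
  -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,logical_block_size=4096,physical_block_size=4096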
00:06:09.423 00:06:09.433 [Pipeline] } 00:06:09.452 [Pipeline] // stage 00:06:09.464 [Pipeline] dir 00:06:09.465 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt 00:06:09.467 [Pipeline] { 00:06:09.483 [Pipeline] catchError 00:06:09.485 [Pipeline] { 00:06:09.501 [Pipeline] sh 00:06:09.778 + vagrant ssh-config --host vagrant 00:06:09.778 + sed -ne /^Host/,$p 00:06:09.778 + tee ssh_conf 00:06:13.966 Host vagrant 00:06:13.966 HostName 192.168.121.144 00:06:13.966 User vagrant 00:06:13.966 Port 22 00:06:13.966 UserKnownHostsFile /dev/null 00:06:13.966 StrictHostKeyChecking no 00:06:13.966 PasswordAuthentication no 00:06:13.966 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:06:13.966 IdentitiesOnly yes 00:06:13.966 LogLevel FATAL 00:06:13.966 ForwardAgent yes 00:06:13.966 ForwardX11 yes 00:06:13.966 00:06:13.979 [Pipeline] withEnv 00:06:13.981 [Pipeline] { 00:06:13.993 [Pipeline] sh 00:06:14.272 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:06:14.272 source /etc/os-release 00:06:14.272 [[ -e /image.version ]] && img=$(< /image.version) 00:06:14.272 # Minimal, systemd-like check. 00:06:14.272 if [[ -e /.dockerenv ]]; then 00:06:14.272 # Clear garbage from the node's name: 00:06:14.272 # agt-er_autotest_547-896 -> autotest_547-896 00:06:14.272 # $HOSTNAME is the actual container id 00:06:14.273 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:06:14.273 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:06:14.273 # We can assume this is a mount from a host where container is running, 00:06:14.273 # so fetch its hostname to easily identify the target swarm worker. 00:06:14.273 container="$(< /etc/hostname) ($agent)" 00:06:14.273 else 00:06:14.273 # Fallback 00:06:14.273 container=$agent 00:06:14.273 fi 00:06:14.273 fi 00:06:14.273 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:06:14.273 00:06:14.542 [Pipeline] } 00:06:14.563 [Pipeline] // withEnv 00:06:14.572 [Pipeline] setCustomBuildProperty 00:06:14.587 [Pipeline] stage 00:06:14.589 [Pipeline] { (Tests) 00:06:14.607 [Pipeline] sh 00:06:14.881 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:06:15.153 [Pipeline] sh 00:06:15.434 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:06:15.449 [Pipeline] timeout 00:06:15.450 Timeout set to expire in 40 min 00:06:15.452 [Pipeline] { 00:06:15.469 [Pipeline] sh 00:06:15.749 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:06:16.314 HEAD is now at dca21ec0f scripts/nvmf_perf: confirm set system settings 00:06:16.330 [Pipeline] sh 00:06:16.608 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:06:16.879 [Pipeline] sh 00:06:17.159 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:06:17.179 [Pipeline] sh 00:06:17.457 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-vg-autotest ./autoruner.sh spdk_repo 00:06:17.716 ++ readlink -f spdk_repo 00:06:17.716 + DIR_ROOT=/home/vagrant/spdk_repo 00:06:17.716 + [[ -n /home/vagrant/spdk_repo ]] 00:06:17.716 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:06:17.716 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:06:17.716 + [[ -d 
/home/vagrant/spdk_repo/spdk ]] 00:06:17.716 + [[ ! -d /home/vagrant/spdk_repo/output ]] 00:06:17.716 + [[ -d /home/vagrant/spdk_repo/output ]] 00:06:17.716 + [[ nvmf-tcp-vg-autotest == pkgdep-* ]] 00:06:17.716 + cd /home/vagrant/spdk_repo 00:06:17.716 + source /etc/os-release 00:06:17.716 ++ NAME='Fedora Linux' 00:06:17.716 ++ VERSION='38 (Cloud Edition)' 00:06:17.716 ++ ID=fedora 00:06:17.716 ++ VERSION_ID=38 00:06:17.716 ++ VERSION_CODENAME= 00:06:17.716 ++ PLATFORM_ID=platform:f38 00:06:17.716 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:06:17.716 ++ ANSI_COLOR='0;38;2;60;110;180' 00:06:17.716 ++ LOGO=fedora-logo-icon 00:06:17.716 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:06:17.716 ++ HOME_URL=https://fedoraproject.org/ 00:06:17.716 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:06:17.716 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:06:17.716 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:06:17.716 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:06:17.716 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:06:17.716 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:06:17.716 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:06:17.716 ++ SUPPORT_END=2024-05-14 00:06:17.716 ++ VARIANT='Cloud Edition' 00:06:17.716 ++ VARIANT_ID=cloud 00:06:17.716 + uname -a 00:06:17.716 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:06:17.716 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:06:17.974 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:17.974 Hugepages 00:06:17.974 node hugesize free / total 00:06:17.974 node0 1048576kB 0 / 0 00:06:17.974 node0 2048kB 0 / 0 00:06:17.974 00:06:17.974 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:17.974 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:06:18.232 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:06:18.232 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:06:18.232 + rm -f /tmp/spdk-ld-path 00:06:18.233 + source autorun-spdk.conf 00:06:18.233 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:06:18.233 ++ SPDK_TEST_NVMF=1 00:06:18.233 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:06:18.233 ++ SPDK_TEST_USDT=1 00:06:18.233 ++ SPDK_TEST_NVMF_MDNS=1 00:06:18.233 ++ SPDK_RUN_UBSAN=1 00:06:18.233 ++ NET_TYPE=virt 00:06:18.233 ++ SPDK_JSONRPC_GO_CLIENT=1 00:06:18.233 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:06:18.233 ++ RUN_NIGHTLY=0 00:06:18.233 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:06:18.233 + [[ -n '' ]] 00:06:18.233 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:06:18.233 + for M in /var/spdk/build-*-manifest.txt 00:06:18.233 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:06:18.233 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:06:18.233 + for M in /var/spdk/build-*-manifest.txt 00:06:18.233 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:06:18.233 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:06:18.233 ++ uname 00:06:18.233 + [[ Linux == \L\i\n\u\x ]] 00:06:18.233 + sudo dmesg -T 00:06:18.233 + sudo dmesg --clear 00:06:18.233 + dmesg_pid=5273 00:06:18.233 + [[ Fedora Linux == FreeBSD ]] 00:06:18.233 + sudo dmesg -Tw 00:06:18.233 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:18.233 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:18.233 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 
00:06:18.233 + [[ -x /usr/src/fio-static/fio ]] 00:06:18.233 + export FIO_BIN=/usr/src/fio-static/fio 00:06:18.233 + FIO_BIN=/usr/src/fio-static/fio 00:06:18.233 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:06:18.233 + [[ ! -v VFIO_QEMU_BIN ]] 00:06:18.233 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:06:18.233 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:18.233 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:18.233 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:06:18.233 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:18.233 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:18.233 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:06:18.233 Test configuration: 00:06:18.233 SPDK_RUN_FUNCTIONAL_TEST=1 00:06:18.233 SPDK_TEST_NVMF=1 00:06:18.233 SPDK_TEST_NVMF_TRANSPORT=tcp 00:06:18.233 SPDK_TEST_USDT=1 00:06:18.233 SPDK_TEST_NVMF_MDNS=1 00:06:18.233 SPDK_RUN_UBSAN=1 00:06:18.233 NET_TYPE=virt 00:06:18.233 SPDK_JSONRPC_GO_CLIENT=1 00:06:18.233 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:06:18.233 RUN_NIGHTLY=0 17:07:04 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:18.233 17:07:04 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:06:18.233 17:07:04 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:18.233 17:07:04 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:18.233 17:07:04 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.233 17:07:04 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.233 17:07:04 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.233 17:07:04 -- paths/export.sh@5 -- $ export PATH 00:06:18.233 17:07:04 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:18.233 17:07:04 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:06:18.233 17:07:04 -- common/autobuild_common.sh@447 -- $ date +%s 00:06:18.233 17:07:04 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721840824.XXXXXX 00:06:18.233 
17:07:04 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721840824.OJ4ITL 00:06:18.233 17:07:04 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:06:18.233 17:07:04 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:06:18.233 17:07:04 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:06:18.233 17:07:04 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:06:18.233 17:07:04 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:06:18.558 17:07:04 -- common/autobuild_common.sh@463 -- $ get_config_params 00:06:18.558 17:07:04 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:06:18.558 17:07:04 -- common/autotest_common.sh@10 -- $ set +x 00:06:18.558 17:07:04 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang' 00:06:18.558 17:07:04 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:06:18.558 17:07:04 -- pm/common@17 -- $ local monitor 00:06:18.558 17:07:04 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:18.558 17:07:04 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:18.558 17:07:04 -- pm/common@25 -- $ sleep 1 00:06:18.558 17:07:04 -- pm/common@21 -- $ date +%s 00:06:18.558 17:07:04 -- pm/common@21 -- $ date +%s 00:06:18.558 17:07:04 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721840824 00:06:18.558 17:07:04 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721840824 00:06:18.558 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721840824_collect-vmstat.pm.log 00:06:18.558 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721840824_collect-cpu-load.pm.log 00:06:19.494 17:07:05 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:06:19.494 17:07:05 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:06:19.494 17:07:05 -- spdk/autobuild.sh@12 -- $ umask 022 00:06:19.494 17:07:05 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:06:19.494 17:07:05 -- spdk/autobuild.sh@16 -- $ date -u 00:06:19.494 Wed Jul 24 05:07:05 PM UTC 2024 00:06:19.494 17:07:05 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:06:19.494 v24.09-pre-323-gdca21ec0f 00:06:19.494 17:07:05 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:06:19.494 17:07:05 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:06:19.494 17:07:05 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:06:19.494 17:07:05 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:06:19.494 17:07:05 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:06:19.494 17:07:05 -- common/autotest_common.sh@10 -- $ set +x 00:06:19.494 ************************************ 00:06:19.494 START TEST ubsan 00:06:19.494 ************************************ 00:06:19.494 using ubsan 00:06:19.494 17:07:05 ubsan -- common/autotest_common.sh@1125 -- 
$ echo 'using ubsan' 00:06:19.494 00:06:19.494 real 0m0.000s 00:06:19.494 user 0m0.000s 00:06:19.494 sys 0m0.000s 00:06:19.494 17:07:05 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:06:19.494 17:07:05 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:06:19.494 ************************************ 00:06:19.494 END TEST ubsan 00:06:19.494 ************************************ 00:06:19.494 17:07:05 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:06:19.494 17:07:05 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:06:19.494 17:07:05 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:06:19.494 17:07:05 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:06:19.494 17:07:05 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:06:19.494 17:07:05 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:06:19.494 17:07:05 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:06:19.494 17:07:05 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:06:19.494 17:07:05 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang --with-shared 00:06:19.494 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:19.494 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:06:20.060 Using 'verbs' RDMA provider 00:06:35.507 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:06:47.741 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:06:47.741 go version go1.21.1 linux/amd64 00:06:47.741 Creating mk/config.mk...done. 00:06:47.741 Creating mk/cc.flags.mk...done. 00:06:47.741 Type 'make' to build. 00:06:47.741 17:07:33 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:06:47.741 17:07:33 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:06:47.741 17:07:33 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:06:47.741 17:07:33 -- common/autotest_common.sh@10 -- $ set +x 00:06:47.741 ************************************ 00:06:47.741 START TEST make 00:06:47.741 ************************************ 00:06:47.741 17:07:33 make -- common/autotest_common.sh@1125 -- $ make -j10 00:06:47.741 make[1]: Nothing to be done for 'all'. 
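The configure step above and the make that follows (including the DPDK submodule build seen next) are driven by spdk/autobuild.sh with the config_params assembled earlier. Roughly the same build can be reproduced by hand with a subset of those flags; a minimal sketch, assuming a fresh SPDK checkout with submodules (the CI wrapper also starts the collect-cpu-load/collect-vmstat monitors and other setup not repeated here):

# Fetch SPDK and its bundled DPDK submodule, then configure and build
# with a few of the --enable/--with options printed in the log above.
git clone https://github.com/spdk/spdk.git
cd spdk
git submodule update --init
./configure --enable-debug --enable-werror --enable-ubsan --with-shared
make -j"$(nproc)"

The fuller CI flag set (e.g. --with-avahi, --with-golang, --with-usdt) needs extra host packages, which scripts/pkgdep.sh can install.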
00:07:02.612 The Meson build system 00:07:02.612 Version: 1.3.1 00:07:02.612 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:07:02.612 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:07:02.612 Build type: native build 00:07:02.612 Program cat found: YES (/usr/bin/cat) 00:07:02.612 Project name: DPDK 00:07:02.612 Project version: 24.03.0 00:07:02.612 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:07:02.612 C linker for the host machine: cc ld.bfd 2.39-16 00:07:02.612 Host machine cpu family: x86_64 00:07:02.612 Host machine cpu: x86_64 00:07:02.612 Message: ## Building in Developer Mode ## 00:07:02.612 Program pkg-config found: YES (/usr/bin/pkg-config) 00:07:02.612 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:07:02.612 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:07:02.612 Program python3 found: YES (/usr/bin/python3) 00:07:02.612 Program cat found: YES (/usr/bin/cat) 00:07:02.612 Compiler for C supports arguments -march=native: YES 00:07:02.612 Checking for size of "void *" : 8 00:07:02.612 Checking for size of "void *" : 8 (cached) 00:07:02.612 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:07:02.612 Library m found: YES 00:07:02.612 Library numa found: YES 00:07:02.612 Has header "numaif.h" : YES 00:07:02.612 Library fdt found: NO 00:07:02.612 Library execinfo found: NO 00:07:02.612 Has header "execinfo.h" : YES 00:07:02.612 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:07:02.612 Run-time dependency libarchive found: NO (tried pkgconfig) 00:07:02.612 Run-time dependency libbsd found: NO (tried pkgconfig) 00:07:02.612 Run-time dependency jansson found: NO (tried pkgconfig) 00:07:02.613 Run-time dependency openssl found: YES 3.0.9 00:07:02.613 Run-time dependency libpcap found: YES 1.10.4 00:07:02.613 Has header "pcap.h" with dependency libpcap: YES 00:07:02.613 Compiler for C supports arguments -Wcast-qual: YES 00:07:02.613 Compiler for C supports arguments -Wdeprecated: YES 00:07:02.613 Compiler for C supports arguments -Wformat: YES 00:07:02.613 Compiler for C supports arguments -Wformat-nonliteral: NO 00:07:02.613 Compiler for C supports arguments -Wformat-security: NO 00:07:02.613 Compiler for C supports arguments -Wmissing-declarations: YES 00:07:02.613 Compiler for C supports arguments -Wmissing-prototypes: YES 00:07:02.613 Compiler for C supports arguments -Wnested-externs: YES 00:07:02.613 Compiler for C supports arguments -Wold-style-definition: YES 00:07:02.613 Compiler for C supports arguments -Wpointer-arith: YES 00:07:02.613 Compiler for C supports arguments -Wsign-compare: YES 00:07:02.613 Compiler for C supports arguments -Wstrict-prototypes: YES 00:07:02.613 Compiler for C supports arguments -Wundef: YES 00:07:02.613 Compiler for C supports arguments -Wwrite-strings: YES 00:07:02.613 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:07:02.613 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:07:02.613 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:07:02.613 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:07:02.613 Program objdump found: YES (/usr/bin/objdump) 00:07:02.613 Compiler for C supports arguments -mavx512f: YES 00:07:02.613 Checking if "AVX512 checking" compiles: YES 00:07:02.613 Fetching value of define "__SSE4_2__" : 1 00:07:02.613 Fetching value of define 
"__AES__" : 1 00:07:02.613 Fetching value of define "__AVX__" : 1 00:07:02.613 Fetching value of define "__AVX2__" : 1 00:07:02.613 Fetching value of define "__AVX512BW__" : (undefined) 00:07:02.613 Fetching value of define "__AVX512CD__" : (undefined) 00:07:02.613 Fetching value of define "__AVX512DQ__" : (undefined) 00:07:02.613 Fetching value of define "__AVX512F__" : (undefined) 00:07:02.613 Fetching value of define "__AVX512VL__" : (undefined) 00:07:02.613 Fetching value of define "__PCLMUL__" : 1 00:07:02.613 Fetching value of define "__RDRND__" : 1 00:07:02.613 Fetching value of define "__RDSEED__" : 1 00:07:02.613 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:07:02.613 Fetching value of define "__znver1__" : (undefined) 00:07:02.613 Fetching value of define "__znver2__" : (undefined) 00:07:02.613 Fetching value of define "__znver3__" : (undefined) 00:07:02.613 Fetching value of define "__znver4__" : (undefined) 00:07:02.613 Compiler for C supports arguments -Wno-format-truncation: YES 00:07:02.613 Message: lib/log: Defining dependency "log" 00:07:02.613 Message: lib/kvargs: Defining dependency "kvargs" 00:07:02.613 Message: lib/telemetry: Defining dependency "telemetry" 00:07:02.613 Checking for function "getentropy" : NO 00:07:02.613 Message: lib/eal: Defining dependency "eal" 00:07:02.613 Message: lib/ring: Defining dependency "ring" 00:07:02.613 Message: lib/rcu: Defining dependency "rcu" 00:07:02.613 Message: lib/mempool: Defining dependency "mempool" 00:07:02.613 Message: lib/mbuf: Defining dependency "mbuf" 00:07:02.613 Fetching value of define "__PCLMUL__" : 1 (cached) 00:07:02.613 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:07:02.613 Compiler for C supports arguments -mpclmul: YES 00:07:02.613 Compiler for C supports arguments -maes: YES 00:07:02.613 Compiler for C supports arguments -mavx512f: YES (cached) 00:07:02.613 Compiler for C supports arguments -mavx512bw: YES 00:07:02.613 Compiler for C supports arguments -mavx512dq: YES 00:07:02.613 Compiler for C supports arguments -mavx512vl: YES 00:07:02.613 Compiler for C supports arguments -mvpclmulqdq: YES 00:07:02.613 Compiler for C supports arguments -mavx2: YES 00:07:02.613 Compiler for C supports arguments -mavx: YES 00:07:02.613 Message: lib/net: Defining dependency "net" 00:07:02.613 Message: lib/meter: Defining dependency "meter" 00:07:02.613 Message: lib/ethdev: Defining dependency "ethdev" 00:07:02.613 Message: lib/pci: Defining dependency "pci" 00:07:02.613 Message: lib/cmdline: Defining dependency "cmdline" 00:07:02.613 Message: lib/hash: Defining dependency "hash" 00:07:02.613 Message: lib/timer: Defining dependency "timer" 00:07:02.613 Message: lib/compressdev: Defining dependency "compressdev" 00:07:02.613 Message: lib/cryptodev: Defining dependency "cryptodev" 00:07:02.613 Message: lib/dmadev: Defining dependency "dmadev" 00:07:02.613 Compiler for C supports arguments -Wno-cast-qual: YES 00:07:02.613 Message: lib/power: Defining dependency "power" 00:07:02.613 Message: lib/reorder: Defining dependency "reorder" 00:07:02.613 Message: lib/security: Defining dependency "security" 00:07:02.613 Has header "linux/userfaultfd.h" : YES 00:07:02.613 Has header "linux/vduse.h" : YES 00:07:02.613 Message: lib/vhost: Defining dependency "vhost" 00:07:02.613 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:07:02.613 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:07:02.613 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:07:02.613 Message: 
drivers/mempool/ring: Defining dependency "mempool_ring" 00:07:02.613 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:07:02.613 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:07:02.613 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:07:02.613 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:07:02.613 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:07:02.613 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:07:02.613 Program doxygen found: YES (/usr/bin/doxygen) 00:07:02.613 Configuring doxy-api-html.conf using configuration 00:07:02.613 Configuring doxy-api-man.conf using configuration 00:07:02.613 Program mandb found: YES (/usr/bin/mandb) 00:07:02.613 Program sphinx-build found: NO 00:07:02.613 Configuring rte_build_config.h using configuration 00:07:02.613 Message: 00:07:02.613 ================= 00:07:02.613 Applications Enabled 00:07:02.613 ================= 00:07:02.613 00:07:02.613 apps: 00:07:02.613 00:07:02.613 00:07:02.613 Message: 00:07:02.613 ================= 00:07:02.613 Libraries Enabled 00:07:02.613 ================= 00:07:02.613 00:07:02.613 libs: 00:07:02.613 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:07:02.613 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:07:02.613 cryptodev, dmadev, power, reorder, security, vhost, 00:07:02.613 00:07:02.613 Message: 00:07:02.613 =============== 00:07:02.613 Drivers Enabled 00:07:02.613 =============== 00:07:02.613 00:07:02.613 common: 00:07:02.613 00:07:02.613 bus: 00:07:02.613 pci, vdev, 00:07:02.613 mempool: 00:07:02.613 ring, 00:07:02.613 dma: 00:07:02.613 00:07:02.613 net: 00:07:02.613 00:07:02.613 crypto: 00:07:02.613 00:07:02.613 compress: 00:07:02.613 00:07:02.613 vdpa: 00:07:02.613 00:07:02.613 00:07:02.613 Message: 00:07:02.613 ================= 00:07:02.613 Content Skipped 00:07:02.613 ================= 00:07:02.613 00:07:02.613 apps: 00:07:02.613 dumpcap: explicitly disabled via build config 00:07:02.613 graph: explicitly disabled via build config 00:07:02.613 pdump: explicitly disabled via build config 00:07:02.613 proc-info: explicitly disabled via build config 00:07:02.613 test-acl: explicitly disabled via build config 00:07:02.613 test-bbdev: explicitly disabled via build config 00:07:02.613 test-cmdline: explicitly disabled via build config 00:07:02.613 test-compress-perf: explicitly disabled via build config 00:07:02.613 test-crypto-perf: explicitly disabled via build config 00:07:02.613 test-dma-perf: explicitly disabled via build config 00:07:02.613 test-eventdev: explicitly disabled via build config 00:07:02.613 test-fib: explicitly disabled via build config 00:07:02.613 test-flow-perf: explicitly disabled via build config 00:07:02.613 test-gpudev: explicitly disabled via build config 00:07:02.613 test-mldev: explicitly disabled via build config 00:07:02.613 test-pipeline: explicitly disabled via build config 00:07:02.613 test-pmd: explicitly disabled via build config 00:07:02.613 test-regex: explicitly disabled via build config 00:07:02.613 test-sad: explicitly disabled via build config 00:07:02.613 test-security-perf: explicitly disabled via build config 00:07:02.613 00:07:02.613 libs: 00:07:02.613 argparse: explicitly disabled via build config 00:07:02.613 metrics: explicitly disabled via build config 00:07:02.613 acl: explicitly disabled via build config 00:07:02.613 bbdev: explicitly disabled via build config 00:07:02.613 
bitratestats: explicitly disabled via build config 00:07:02.613 bpf: explicitly disabled via build config 00:07:02.613 cfgfile: explicitly disabled via build config 00:07:02.613 distributor: explicitly disabled via build config 00:07:02.613 efd: explicitly disabled via build config 00:07:02.613 eventdev: explicitly disabled via build config 00:07:02.613 dispatcher: explicitly disabled via build config 00:07:02.613 gpudev: explicitly disabled via build config 00:07:02.613 gro: explicitly disabled via build config 00:07:02.613 gso: explicitly disabled via build config 00:07:02.613 ip_frag: explicitly disabled via build config 00:07:02.613 jobstats: explicitly disabled via build config 00:07:02.613 latencystats: explicitly disabled via build config 00:07:02.613 lpm: explicitly disabled via build config 00:07:02.613 member: explicitly disabled via build config 00:07:02.613 pcapng: explicitly disabled via build config 00:07:02.614 rawdev: explicitly disabled via build config 00:07:02.614 regexdev: explicitly disabled via build config 00:07:02.614 mldev: explicitly disabled via build config 00:07:02.614 rib: explicitly disabled via build config 00:07:02.614 sched: explicitly disabled via build config 00:07:02.614 stack: explicitly disabled via build config 00:07:02.614 ipsec: explicitly disabled via build config 00:07:02.614 pdcp: explicitly disabled via build config 00:07:02.614 fib: explicitly disabled via build config 00:07:02.614 port: explicitly disabled via build config 00:07:02.614 pdump: explicitly disabled via build config 00:07:02.614 table: explicitly disabled via build config 00:07:02.614 pipeline: explicitly disabled via build config 00:07:02.614 graph: explicitly disabled via build config 00:07:02.614 node: explicitly disabled via build config 00:07:02.614 00:07:02.614 drivers: 00:07:02.614 common/cpt: not in enabled drivers build config 00:07:02.614 common/dpaax: not in enabled drivers build config 00:07:02.614 common/iavf: not in enabled drivers build config 00:07:02.614 common/idpf: not in enabled drivers build config 00:07:02.614 common/ionic: not in enabled drivers build config 00:07:02.614 common/mvep: not in enabled drivers build config 00:07:02.614 common/octeontx: not in enabled drivers build config 00:07:02.614 bus/auxiliary: not in enabled drivers build config 00:07:02.614 bus/cdx: not in enabled drivers build config 00:07:02.614 bus/dpaa: not in enabled drivers build config 00:07:02.614 bus/fslmc: not in enabled drivers build config 00:07:02.614 bus/ifpga: not in enabled drivers build config 00:07:02.614 bus/platform: not in enabled drivers build config 00:07:02.614 bus/uacce: not in enabled drivers build config 00:07:02.614 bus/vmbus: not in enabled drivers build config 00:07:02.614 common/cnxk: not in enabled drivers build config 00:07:02.614 common/mlx5: not in enabled drivers build config 00:07:02.614 common/nfp: not in enabled drivers build config 00:07:02.614 common/nitrox: not in enabled drivers build config 00:07:02.614 common/qat: not in enabled drivers build config 00:07:02.614 common/sfc_efx: not in enabled drivers build config 00:07:02.614 mempool/bucket: not in enabled drivers build config 00:07:02.614 mempool/cnxk: not in enabled drivers build config 00:07:02.614 mempool/dpaa: not in enabled drivers build config 00:07:02.614 mempool/dpaa2: not in enabled drivers build config 00:07:02.614 mempool/octeontx: not in enabled drivers build config 00:07:02.614 mempool/stack: not in enabled drivers build config 00:07:02.614 dma/cnxk: not in enabled drivers build 
config 00:07:02.614 dma/dpaa: not in enabled drivers build config 00:07:02.614 dma/dpaa2: not in enabled drivers build config 00:07:02.614 dma/hisilicon: not in enabled drivers build config 00:07:02.614 dma/idxd: not in enabled drivers build config 00:07:02.614 dma/ioat: not in enabled drivers build config 00:07:02.614 dma/skeleton: not in enabled drivers build config 00:07:02.614 net/af_packet: not in enabled drivers build config 00:07:02.614 net/af_xdp: not in enabled drivers build config 00:07:02.614 net/ark: not in enabled drivers build config 00:07:02.614 net/atlantic: not in enabled drivers build config 00:07:02.614 net/avp: not in enabled drivers build config 00:07:02.614 net/axgbe: not in enabled drivers build config 00:07:02.614 net/bnx2x: not in enabled drivers build config 00:07:02.614 net/bnxt: not in enabled drivers build config 00:07:02.614 net/bonding: not in enabled drivers build config 00:07:02.614 net/cnxk: not in enabled drivers build config 00:07:02.614 net/cpfl: not in enabled drivers build config 00:07:02.614 net/cxgbe: not in enabled drivers build config 00:07:02.614 net/dpaa: not in enabled drivers build config 00:07:02.614 net/dpaa2: not in enabled drivers build config 00:07:02.614 net/e1000: not in enabled drivers build config 00:07:02.614 net/ena: not in enabled drivers build config 00:07:02.614 net/enetc: not in enabled drivers build config 00:07:02.614 net/enetfec: not in enabled drivers build config 00:07:02.614 net/enic: not in enabled drivers build config 00:07:02.614 net/failsafe: not in enabled drivers build config 00:07:02.614 net/fm10k: not in enabled drivers build config 00:07:02.614 net/gve: not in enabled drivers build config 00:07:02.614 net/hinic: not in enabled drivers build config 00:07:02.614 net/hns3: not in enabled drivers build config 00:07:02.614 net/i40e: not in enabled drivers build config 00:07:02.614 net/iavf: not in enabled drivers build config 00:07:02.614 net/ice: not in enabled drivers build config 00:07:02.614 net/idpf: not in enabled drivers build config 00:07:02.614 net/igc: not in enabled drivers build config 00:07:02.614 net/ionic: not in enabled drivers build config 00:07:02.614 net/ipn3ke: not in enabled drivers build config 00:07:02.614 net/ixgbe: not in enabled drivers build config 00:07:02.614 net/mana: not in enabled drivers build config 00:07:02.614 net/memif: not in enabled drivers build config 00:07:02.614 net/mlx4: not in enabled drivers build config 00:07:02.614 net/mlx5: not in enabled drivers build config 00:07:02.614 net/mvneta: not in enabled drivers build config 00:07:02.614 net/mvpp2: not in enabled drivers build config 00:07:02.614 net/netvsc: not in enabled drivers build config 00:07:02.614 net/nfb: not in enabled drivers build config 00:07:02.614 net/nfp: not in enabled drivers build config 00:07:02.614 net/ngbe: not in enabled drivers build config 00:07:02.614 net/null: not in enabled drivers build config 00:07:02.614 net/octeontx: not in enabled drivers build config 00:07:02.614 net/octeon_ep: not in enabled drivers build config 00:07:02.614 net/pcap: not in enabled drivers build config 00:07:02.614 net/pfe: not in enabled drivers build config 00:07:02.614 net/qede: not in enabled drivers build config 00:07:02.614 net/ring: not in enabled drivers build config 00:07:02.614 net/sfc: not in enabled drivers build config 00:07:02.614 net/softnic: not in enabled drivers build config 00:07:02.614 net/tap: not in enabled drivers build config 00:07:02.614 net/thunderx: not in enabled drivers build config 00:07:02.614 
net/txgbe: not in enabled drivers build config 00:07:02.614 net/vdev_netvsc: not in enabled drivers build config 00:07:02.614 net/vhost: not in enabled drivers build config 00:07:02.614 net/virtio: not in enabled drivers build config 00:07:02.614 net/vmxnet3: not in enabled drivers build config 00:07:02.614 raw/*: missing internal dependency, "rawdev" 00:07:02.614 crypto/armv8: not in enabled drivers build config 00:07:02.614 crypto/bcmfs: not in enabled drivers build config 00:07:02.614 crypto/caam_jr: not in enabled drivers build config 00:07:02.614 crypto/ccp: not in enabled drivers build config 00:07:02.614 crypto/cnxk: not in enabled drivers build config 00:07:02.614 crypto/dpaa_sec: not in enabled drivers build config 00:07:02.614 crypto/dpaa2_sec: not in enabled drivers build config 00:07:02.614 crypto/ipsec_mb: not in enabled drivers build config 00:07:02.614 crypto/mlx5: not in enabled drivers build config 00:07:02.614 crypto/mvsam: not in enabled drivers build config 00:07:02.614 crypto/nitrox: not in enabled drivers build config 00:07:02.614 crypto/null: not in enabled drivers build config 00:07:02.614 crypto/octeontx: not in enabled drivers build config 00:07:02.614 crypto/openssl: not in enabled drivers build config 00:07:02.614 crypto/scheduler: not in enabled drivers build config 00:07:02.614 crypto/uadk: not in enabled drivers build config 00:07:02.614 crypto/virtio: not in enabled drivers build config 00:07:02.614 compress/isal: not in enabled drivers build config 00:07:02.614 compress/mlx5: not in enabled drivers build config 00:07:02.614 compress/nitrox: not in enabled drivers build config 00:07:02.614 compress/octeontx: not in enabled drivers build config 00:07:02.614 compress/zlib: not in enabled drivers build config 00:07:02.614 regex/*: missing internal dependency, "regexdev" 00:07:02.614 ml/*: missing internal dependency, "mldev" 00:07:02.614 vdpa/ifc: not in enabled drivers build config 00:07:02.614 vdpa/mlx5: not in enabled drivers build config 00:07:02.614 vdpa/nfp: not in enabled drivers build config 00:07:02.614 vdpa/sfc: not in enabled drivers build config 00:07:02.614 event/*: missing internal dependency, "eventdev" 00:07:02.614 baseband/*: missing internal dependency, "bbdev" 00:07:02.614 gpu/*: missing internal dependency, "gpudev" 00:07:02.614 00:07:02.614 00:07:02.614 Build targets in project: 85 00:07:02.614 00:07:02.614 DPDK 24.03.0 00:07:02.614 00:07:02.614 User defined options 00:07:02.614 buildtype : debug 00:07:02.614 default_library : shared 00:07:02.614 libdir : lib 00:07:02.614 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:07:02.614 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:07:02.614 c_link_args : 00:07:02.614 cpu_instruction_set: native 00:07:02.614 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:07:02.614 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:07:02.614 enable_docs : false 00:07:02.614 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:07:02.614 enable_kmods : false 00:07:02.614 max_lcores : 128 00:07:02.614 tests : false 00:07:02.614 00:07:02.614 Found 
ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:07:02.614 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:07:02.614 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:07:02.614 [2/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:07:02.614 [3/268] Linking static target lib/librte_log.a 00:07:02.614 [4/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:07:02.614 [5/268] Linking static target lib/librte_kvargs.a 00:07:02.614 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:07:02.614 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:07:02.615 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:07:02.615 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:07:02.615 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:07:02.615 [11/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:07:02.615 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:07:02.615 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:07:02.615 [14/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:07:02.615 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:07:02.615 [16/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:07:02.615 [17/268] Linking static target lib/librte_telemetry.a 00:07:02.615 [18/268] Linking target lib/librte_log.so.24.1 00:07:02.615 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:07:02.615 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:07:02.615 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:07:02.615 [22/268] Linking target lib/librte_kvargs.so.24.1 00:07:02.873 [23/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:07:02.873 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:07:02.873 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:07:02.873 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:07:02.873 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:07:02.873 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:07:02.873 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:07:03.130 [30/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:07:03.130 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:07:03.130 [32/268] Linking target lib/librte_telemetry.so.24.1 00:07:03.130 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:07:03.386 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:07:03.386 [35/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:07:03.386 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:07:03.386 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:07:03.952 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:07:03.952 
[39/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:07:03.952 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:07:03.952 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:07:03.952 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:07:03.952 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:07:03.952 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:07:03.952 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:07:03.952 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:07:04.210 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:07:04.210 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:07:04.468 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:07:04.468 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:07:04.726 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:07:04.726 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:07:04.984 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:07:04.984 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:07:04.984 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:07:04.984 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:07:04.984 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:07:04.984 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:07:05.241 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:07:05.241 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:07:05.241 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:07:05.499 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:07:05.499 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:07:05.756 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:07:05.756 [65/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:07:05.756 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:07:06.014 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:07:06.014 [68/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:07:06.014 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:07:06.014 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:07:06.339 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:07:06.339 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:07:06.339 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:07:06.339 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:07:06.339 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:07:06.596 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:07:06.596 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:07:06.596 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:07:06.854 [79/268] Compiling C 
object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:07:06.854 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:07:06.854 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:07:07.112 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:07:07.112 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:07:07.112 [84/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:07:07.112 [85/268] Linking static target lib/librte_ring.a 00:07:07.369 [86/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:07:07.369 [87/268] Linking static target lib/librte_eal.a 00:07:07.626 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:07:07.626 [89/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:07:07.626 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:07:07.626 [91/268] Linking static target lib/librte_rcu.a 00:07:07.626 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:07:07.626 [93/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:07:07.884 [94/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:07:07.884 [95/268] Linking static target lib/librte_mempool.a 00:07:07.884 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:07:07.884 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:07:07.884 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:07:08.141 [99/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:07:08.399 [100/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:07:08.399 [101/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:07:08.399 [102/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:07:08.399 [103/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:07:08.399 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:07:08.399 [105/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:07:08.657 [106/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:07:08.657 [107/268] Linking static target lib/librte_mbuf.a 00:07:08.657 [108/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:07:08.657 [109/268] Linking static target lib/librte_net.a 00:07:08.657 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:07:08.915 [111/268] Linking static target lib/librte_meter.a 00:07:09.173 [112/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:07:09.173 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:07:09.173 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:07:09.173 [115/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:07:09.173 [116/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:07:09.173 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:07:09.431 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:07:09.689 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:07:09.689 [120/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture 
output) 00:07:09.946 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:07:10.205 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:07:10.205 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:07:10.463 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:07:10.463 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:07:10.463 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:07:10.463 [127/268] Linking static target lib/librte_pci.a 00:07:10.463 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:07:10.463 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:07:10.721 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:07:10.721 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:07:10.721 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:07:10.721 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:07:10.721 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:07:10.721 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:07:10.979 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:07:10.979 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:07:10.979 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:07:10.979 [139/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:07:10.979 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:07:10.979 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:07:10.979 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:07:10.979 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:07:10.979 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:07:11.238 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:07:11.238 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:07:11.238 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:07:11.238 [148/268] Linking static target lib/librte_ethdev.a 00:07:11.238 [149/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:07:11.238 [150/268] Linking static target lib/librte_cmdline.a 00:07:11.804 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:07:11.804 [152/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:07:11.804 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:07:11.804 [154/268] Linking static target lib/librte_timer.a 00:07:11.804 [155/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:07:11.804 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:07:11.804 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:07:12.369 [158/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:07:12.369 [159/268] Linking static target lib/librte_hash.a 00:07:12.369 [160/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:07:12.369 [161/268] Generating lib/timer.sym_chk with a 
custom command (wrapped by meson to capture output) 00:07:12.369 [162/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:07:12.369 [163/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:07:12.369 [164/268] Linking static target lib/librte_compressdev.a 00:07:12.627 [165/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:07:12.627 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:07:12.886 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:07:12.886 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:07:12.886 [169/268] Linking static target lib/librte_dmadev.a 00:07:13.145 [170/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:07:13.145 [171/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:07:13.145 [172/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:07:13.145 [173/268] Linking static target lib/librte_cryptodev.a 00:07:13.145 [174/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:07:13.145 [175/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:07:13.404 [176/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:07:13.404 [177/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:13.404 [178/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:07:13.662 [179/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:07:13.662 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:07:13.662 [181/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:07:13.920 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:07:13.920 [183/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:13.920 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:07:14.181 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:07:14.181 [186/268] Linking static target lib/librte_power.a 00:07:14.181 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:07:14.181 [188/268] Linking static target lib/librte_reorder.a 00:07:14.440 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:07:14.440 [190/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:07:14.440 [191/268] Linking static target lib/librte_security.a 00:07:14.440 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:07:14.440 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:07:14.698 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:07:14.956 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:07:15.213 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:07:15.213 [197/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:07:15.213 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:07:15.213 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:07:15.472 [200/268] Generating lib/cryptodev.sym_chk with a custom command 
(wrapped by meson to capture output) 00:07:15.472 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:07:15.472 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:07:15.729 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:07:15.729 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:07:15.987 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:07:15.987 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:07:15.987 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:07:16.245 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:07:16.245 [209/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:07:16.245 [210/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:07:16.245 [211/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:07:16.245 [212/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:07:16.503 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:07:16.504 [214/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:07:16.504 [215/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:07:16.504 [216/268] Linking static target drivers/librte_bus_pci.a 00:07:16.504 [217/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:07:16.504 [218/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:07:16.504 [219/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:07:16.504 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:07:16.504 [221/268] Linking static target drivers/librte_bus_vdev.a 00:07:16.504 [222/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:07:16.762 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:07:16.762 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:07:16.762 [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:07:16.762 [226/268] Linking static target drivers/librte_mempool_ring.a 00:07:16.762 [227/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:17.020 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:07:17.586 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:07:17.586 [230/268] Linking static target lib/librte_vhost.a 00:07:18.518 [231/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:07:18.855 [232/268] Linking target lib/librte_eal.so.24.1 00:07:18.855 [233/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:07:18.855 [234/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:07:18.855 [235/268] Linking target lib/librte_timer.so.24.1 00:07:18.855 [236/268] Linking target lib/librte_ring.so.24.1 00:07:18.855 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:07:18.855 [238/268] Linking target lib/librte_dmadev.so.24.1 00:07:18.855 [239/268] Linking target 
lib/librte_pci.so.24.1 00:07:18.855 [240/268] Linking target lib/librte_meter.so.24.1 00:07:19.113 [241/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:07:19.113 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:07:19.113 [243/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:19.113 [244/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:07:19.113 [245/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:07:19.113 [246/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:07:19.113 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:07:19.113 [248/268] Linking target lib/librte_rcu.so.24.1 00:07:19.113 [249/268] Linking target lib/librte_mempool.so.24.1 00:07:19.372 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:07:19.372 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:07:19.372 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:07:19.372 [253/268] Linking target lib/librte_mbuf.so.24.1 00:07:19.630 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:07:19.630 [255/268] Linking target lib/librte_compressdev.so.24.1 00:07:19.630 [256/268] Linking target lib/librte_cryptodev.so.24.1 00:07:19.630 [257/268] Linking target lib/librte_reorder.so.24.1 00:07:19.630 [258/268] Linking target lib/librte_net.so.24.1 00:07:19.888 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:07:19.888 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:07:19.888 [261/268] Linking target lib/librte_security.so.24.1 00:07:19.888 [262/268] Linking target lib/librte_hash.so.24.1 00:07:19.888 [263/268] Linking target lib/librte_cmdline.so.24.1 00:07:19.888 [264/268] Linking target lib/librte_ethdev.so.24.1 00:07:19.888 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:07:20.146 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:07:20.146 [267/268] Linking target lib/librte_power.so.24.1 00:07:20.146 [268/268] Linking target lib/librte_vhost.so.24.1 00:07:20.146 INFO: autodetecting backend as ninja 00:07:20.146 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:07:21.527 CC lib/log/log_flags.o 00:07:21.527 CC lib/log/log.o 00:07:21.527 CC lib/log/log_deprecated.o 00:07:21.527 CC lib/ut/ut.o 00:07:21.527 CC lib/ut_mock/mock.o 00:07:21.527 LIB libspdk_ut.a 00:07:21.527 LIB libspdk_ut_mock.a 00:07:21.527 LIB libspdk_log.a 00:07:21.527 SO libspdk_ut.so.2.0 00:07:21.527 SO libspdk_ut_mock.so.6.0 00:07:21.527 SO libspdk_log.so.7.0 00:07:21.527 SYMLINK libspdk_ut_mock.so 00:07:21.527 SYMLINK libspdk_ut.so 00:07:21.527 SYMLINK libspdk_log.so 00:07:21.786 CC lib/ioat/ioat.o 00:07:21.786 CC lib/util/bit_array.o 00:07:21.786 CC lib/util/base64.o 00:07:21.786 CXX lib/trace_parser/trace.o 00:07:21.786 CC lib/util/crc16.o 00:07:21.786 CC lib/util/crc32.o 00:07:21.786 CC lib/util/crc32c.o 00:07:21.786 CC lib/util/cpuset.o 00:07:21.786 CC lib/dma/dma.o 00:07:22.044 CC lib/vfio_user/host/vfio_user_pci.o 00:07:22.044 CC lib/util/crc32_ieee.o 00:07:22.044 CC lib/util/crc64.o 00:07:22.044 CC lib/util/dif.o 00:07:22.044 CC lib/util/fd.o 
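Once the DPDK sub-build hands off to ninja, the remaining output comes from SPDK's own make-driven build: CC lines are individual C object compiles, CXX lines are C++ objects, LIB lines archive the static libspdk_*.a libraries, and the SO/SYMLINK pairs produce the versioned shared objects and their unversioned links. A minimal sketch of reproducing this stage by hand, assuming the tree has already been configured (the configure flags are not repeated in this part of the log):

  # Minimal sketch; assumes ./configure was already run with the desired options.
  cd /home/vagrant/spdk_repo/spdk
  make -j"$(nproc)"    # emits the CC/LIB/SO/SYMLINK summary lines seen here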
00:07:22.044 CC lib/util/fd_group.o 00:07:22.044 LIB libspdk_dma.a 00:07:22.044 CC lib/vfio_user/host/vfio_user.o 00:07:22.044 CC lib/util/file.o 00:07:22.044 SO libspdk_dma.so.4.0 00:07:22.044 CC lib/util/hexlify.o 00:07:22.303 LIB libspdk_ioat.a 00:07:22.303 SO libspdk_ioat.so.7.0 00:07:22.303 SYMLINK libspdk_dma.so 00:07:22.303 CC lib/util/iov.o 00:07:22.303 CC lib/util/math.o 00:07:22.303 CC lib/util/net.o 00:07:22.303 SYMLINK libspdk_ioat.so 00:07:22.303 CC lib/util/pipe.o 00:07:22.303 CC lib/util/strerror_tls.o 00:07:22.303 CC lib/util/string.o 00:07:22.303 LIB libspdk_vfio_user.a 00:07:22.303 CC lib/util/uuid.o 00:07:22.303 SO libspdk_vfio_user.so.5.0 00:07:22.303 CC lib/util/xor.o 00:07:22.561 SYMLINK libspdk_vfio_user.so 00:07:22.561 CC lib/util/zipf.o 00:07:22.561 LIB libspdk_util.a 00:07:22.818 SO libspdk_util.so.10.0 00:07:22.818 SYMLINK libspdk_util.so 00:07:23.076 LIB libspdk_trace_parser.a 00:07:23.076 SO libspdk_trace_parser.so.5.0 00:07:23.076 CC lib/idxd/idxd.o 00:07:23.076 CC lib/idxd/idxd_user.o 00:07:23.076 CC lib/idxd/idxd_kernel.o 00:07:23.076 CC lib/rdma_utils/rdma_utils.o 00:07:23.076 CC lib/env_dpdk/env.o 00:07:23.076 CC lib/vmd/vmd.o 00:07:23.076 CC lib/json/json_parse.o 00:07:23.076 CC lib/rdma_provider/common.o 00:07:23.076 CC lib/conf/conf.o 00:07:23.076 SYMLINK libspdk_trace_parser.so 00:07:23.076 CC lib/vmd/led.o 00:07:23.334 CC lib/rdma_provider/rdma_provider_verbs.o 00:07:23.334 CC lib/env_dpdk/memory.o 00:07:23.334 CC lib/env_dpdk/pci.o 00:07:23.334 LIB libspdk_conf.a 00:07:23.334 CC lib/env_dpdk/init.o 00:07:23.334 LIB libspdk_rdma_utils.a 00:07:23.334 CC lib/json/json_util.o 00:07:23.334 SO libspdk_conf.so.6.0 00:07:23.334 SO libspdk_rdma_utils.so.1.0 00:07:23.334 SYMLINK libspdk_conf.so 00:07:23.334 SYMLINK libspdk_rdma_utils.so 00:07:23.334 LIB libspdk_rdma_provider.a 00:07:23.334 CC lib/json/json_write.o 00:07:23.334 CC lib/env_dpdk/threads.o 00:07:23.591 SO libspdk_rdma_provider.so.6.0 00:07:23.591 SYMLINK libspdk_rdma_provider.so 00:07:23.591 CC lib/env_dpdk/pci_ioat.o 00:07:23.591 CC lib/env_dpdk/pci_virtio.o 00:07:23.591 CC lib/env_dpdk/pci_vmd.o 00:07:23.591 LIB libspdk_idxd.a 00:07:23.591 CC lib/env_dpdk/pci_idxd.o 00:07:23.591 CC lib/env_dpdk/pci_event.o 00:07:23.849 CC lib/env_dpdk/sigbus_handler.o 00:07:23.849 SO libspdk_idxd.so.12.0 00:07:23.849 LIB libspdk_json.a 00:07:23.849 CC lib/env_dpdk/pci_dpdk.o 00:07:23.849 CC lib/env_dpdk/pci_dpdk_2207.o 00:07:23.849 SO libspdk_json.so.6.0 00:07:23.849 SYMLINK libspdk_idxd.so 00:07:23.849 CC lib/env_dpdk/pci_dpdk_2211.o 00:07:23.849 LIB libspdk_vmd.a 00:07:23.849 SYMLINK libspdk_json.so 00:07:23.849 SO libspdk_vmd.so.6.0 00:07:23.849 SYMLINK libspdk_vmd.so 00:07:24.164 CC lib/jsonrpc/jsonrpc_server.o 00:07:24.164 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:07:24.164 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:07:24.164 CC lib/jsonrpc/jsonrpc_client.o 00:07:24.423 LIB libspdk_jsonrpc.a 00:07:24.423 SO libspdk_jsonrpc.so.6.0 00:07:24.423 SYMLINK libspdk_jsonrpc.so 00:07:24.423 LIB libspdk_env_dpdk.a 00:07:24.682 SO libspdk_env_dpdk.so.15.0 00:07:24.682 CC lib/rpc/rpc.o 00:07:24.682 SYMLINK libspdk_env_dpdk.so 00:07:24.939 LIB libspdk_rpc.a 00:07:24.939 SO libspdk_rpc.so.6.0 00:07:24.939 SYMLINK libspdk_rpc.so 00:07:25.197 CC lib/notify/notify.o 00:07:25.197 CC lib/notify/notify_rpc.o 00:07:25.197 CC lib/keyring/keyring.o 00:07:25.197 CC lib/trace/trace.o 00:07:25.197 CC lib/keyring/keyring_rpc.o 00:07:25.197 CC lib/trace/trace_flags.o 00:07:25.197 CC lib/trace/trace_rpc.o 00:07:25.455 LIB libspdk_notify.a 
00:07:25.455 SO libspdk_notify.so.6.0 00:07:25.455 LIB libspdk_keyring.a 00:07:25.455 SO libspdk_keyring.so.1.0 00:07:25.455 SYMLINK libspdk_notify.so 00:07:25.713 LIB libspdk_trace.a 00:07:25.713 SYMLINK libspdk_keyring.so 00:07:25.713 SO libspdk_trace.so.10.0 00:07:25.713 SYMLINK libspdk_trace.so 00:07:25.972 CC lib/sock/sock.o 00:07:25.972 CC lib/sock/sock_rpc.o 00:07:25.972 CC lib/thread/thread.o 00:07:25.972 CC lib/thread/iobuf.o 00:07:26.538 LIB libspdk_sock.a 00:07:26.538 SO libspdk_sock.so.10.0 00:07:26.538 SYMLINK libspdk_sock.so 00:07:26.822 CC lib/nvme/nvme_ctrlr.o 00:07:26.822 CC lib/nvme/nvme_ctrlr_cmd.o 00:07:26.822 CC lib/nvme/nvme_ns_cmd.o 00:07:26.822 CC lib/nvme/nvme_fabric.o 00:07:26.822 CC lib/nvme/nvme_ns.o 00:07:26.822 CC lib/nvme/nvme_pcie_common.o 00:07:26.822 CC lib/nvme/nvme.o 00:07:26.822 CC lib/nvme/nvme_qpair.o 00:07:26.822 CC lib/nvme/nvme_pcie.o 00:07:27.778 LIB libspdk_thread.a 00:07:27.778 SO libspdk_thread.so.10.1 00:07:27.778 CC lib/nvme/nvme_quirks.o 00:07:27.778 CC lib/nvme/nvme_transport.o 00:07:27.778 CC lib/nvme/nvme_discovery.o 00:07:27.778 SYMLINK libspdk_thread.so 00:07:27.778 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:07:27.778 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:07:27.778 CC lib/nvme/nvme_tcp.o 00:07:27.778 CC lib/nvme/nvme_opal.o 00:07:28.036 CC lib/accel/accel.o 00:07:28.036 CC lib/nvme/nvme_io_msg.o 00:07:28.294 CC lib/nvme/nvme_poll_group.o 00:07:28.552 CC lib/blob/blobstore.o 00:07:28.552 CC lib/init/json_config.o 00:07:28.552 CC lib/init/subsystem.o 00:07:28.552 CC lib/virtio/virtio.o 00:07:28.552 CC lib/accel/accel_rpc.o 00:07:28.552 CC lib/accel/accel_sw.o 00:07:28.810 CC lib/init/subsystem_rpc.o 00:07:28.810 CC lib/nvme/nvme_zns.o 00:07:28.810 CC lib/nvme/nvme_stubs.o 00:07:28.810 CC lib/virtio/virtio_vhost_user.o 00:07:28.810 CC lib/virtio/virtio_vfio_user.o 00:07:29.069 CC lib/init/rpc.o 00:07:29.069 LIB libspdk_accel.a 00:07:29.069 CC lib/blob/request.o 00:07:29.069 SO libspdk_accel.so.16.0 00:07:29.069 CC lib/virtio/virtio_pci.o 00:07:29.069 CC lib/nvme/nvme_auth.o 00:07:29.069 LIB libspdk_init.a 00:07:29.069 SYMLINK libspdk_accel.so 00:07:29.069 CC lib/nvme/nvme_cuse.o 00:07:29.327 SO libspdk_init.so.5.0 00:07:29.327 CC lib/nvme/nvme_rdma.o 00:07:29.327 SYMLINK libspdk_init.so 00:07:29.327 CC lib/blob/zeroes.o 00:07:29.327 CC lib/blob/blob_bs_dev.o 00:07:29.327 LIB libspdk_virtio.a 00:07:29.585 SO libspdk_virtio.so.7.0 00:07:29.585 CC lib/bdev/bdev.o 00:07:29.585 CC lib/bdev/bdev_rpc.o 00:07:29.585 SYMLINK libspdk_virtio.so 00:07:29.585 CC lib/bdev/bdev_zone.o 00:07:29.585 CC lib/bdev/part.o 00:07:29.585 CC lib/event/app.o 00:07:29.585 CC lib/event/reactor.o 00:07:29.844 CC lib/bdev/scsi_nvme.o 00:07:29.844 CC lib/event/log_rpc.o 00:07:29.844 CC lib/event/app_rpc.o 00:07:29.844 CC lib/event/scheduler_static.o 00:07:30.102 LIB libspdk_event.a 00:07:30.102 SO libspdk_event.so.14.0 00:07:30.361 SYMLINK libspdk_event.so 00:07:30.619 LIB libspdk_nvme.a 00:07:30.877 SO libspdk_nvme.so.13.1 00:07:31.135 SYMLINK libspdk_nvme.so 00:07:31.394 LIB libspdk_blob.a 00:07:31.652 SO libspdk_blob.so.11.0 00:07:31.652 SYMLINK libspdk_blob.so 00:07:31.911 CC lib/blobfs/blobfs.o 00:07:31.911 CC lib/blobfs/tree.o 00:07:31.911 CC lib/lvol/lvol.o 00:07:32.169 LIB libspdk_bdev.a 00:07:32.169 SO libspdk_bdev.so.16.0 00:07:32.427 SYMLINK libspdk_bdev.so 00:07:32.427 CC lib/nbd/nbd.o 00:07:32.427 CC lib/nvmf/ctrlr.o 00:07:32.427 CC lib/nvmf/ctrlr_bdev.o 00:07:32.427 CC lib/nbd/nbd_rpc.o 00:07:32.427 CC lib/ublk/ublk.o 00:07:32.427 CC lib/scsi/dev.o 00:07:32.427 CC 
lib/nvmf/ctrlr_discovery.o 00:07:32.427 CC lib/ftl/ftl_core.o 00:07:32.685 CC lib/scsi/lun.o 00:07:32.685 LIB libspdk_blobfs.a 00:07:32.685 SO libspdk_blobfs.so.10.0 00:07:32.944 CC lib/scsi/port.o 00:07:32.944 SYMLINK libspdk_blobfs.so 00:07:32.944 LIB libspdk_lvol.a 00:07:32.944 CC lib/scsi/scsi.o 00:07:32.944 SO libspdk_lvol.so.10.0 00:07:32.944 CC lib/ftl/ftl_init.o 00:07:32.944 SYMLINK libspdk_lvol.so 00:07:32.944 CC lib/scsi/scsi_bdev.o 00:07:32.944 CC lib/ublk/ublk_rpc.o 00:07:32.944 LIB libspdk_nbd.a 00:07:32.944 SO libspdk_nbd.so.7.0 00:07:32.944 CC lib/scsi/scsi_pr.o 00:07:32.944 CC lib/ftl/ftl_layout.o 00:07:33.203 CC lib/ftl/ftl_debug.o 00:07:33.203 SYMLINK libspdk_nbd.so 00:07:33.204 CC lib/nvmf/subsystem.o 00:07:33.204 CC lib/nvmf/nvmf.o 00:07:33.204 CC lib/nvmf/nvmf_rpc.o 00:07:33.204 LIB libspdk_ublk.a 00:07:33.204 CC lib/nvmf/transport.o 00:07:33.204 SO libspdk_ublk.so.3.0 00:07:33.204 SYMLINK libspdk_ublk.so 00:07:33.204 CC lib/ftl/ftl_io.o 00:07:33.204 CC lib/nvmf/tcp.o 00:07:33.461 CC lib/ftl/ftl_sb.o 00:07:33.461 CC lib/nvmf/stubs.o 00:07:33.461 CC lib/scsi/scsi_rpc.o 00:07:33.461 CC lib/ftl/ftl_l2p.o 00:07:33.720 CC lib/nvmf/mdns_server.o 00:07:33.720 CC lib/scsi/task.o 00:07:33.720 CC lib/ftl/ftl_l2p_flat.o 00:07:33.720 CC lib/nvmf/rdma.o 00:07:33.978 LIB libspdk_scsi.a 00:07:33.978 CC lib/nvmf/auth.o 00:07:33.978 SO libspdk_scsi.so.9.0 00:07:33.978 CC lib/ftl/ftl_nv_cache.o 00:07:33.978 SYMLINK libspdk_scsi.so 00:07:33.978 CC lib/ftl/ftl_band.o 00:07:33.978 CC lib/ftl/ftl_band_ops.o 00:07:34.236 CC lib/iscsi/conn.o 00:07:34.236 CC lib/iscsi/init_grp.o 00:07:34.236 CC lib/vhost/vhost.o 00:07:34.236 CC lib/ftl/ftl_writer.o 00:07:34.494 CC lib/iscsi/iscsi.o 00:07:34.494 CC lib/ftl/ftl_rq.o 00:07:34.494 CC lib/iscsi/md5.o 00:07:34.495 CC lib/ftl/ftl_reloc.o 00:07:34.753 CC lib/iscsi/param.o 00:07:34.753 CC lib/iscsi/portal_grp.o 00:07:34.753 CC lib/ftl/ftl_l2p_cache.o 00:07:34.753 CC lib/iscsi/tgt_node.o 00:07:35.011 CC lib/iscsi/iscsi_subsystem.o 00:07:35.011 CC lib/iscsi/iscsi_rpc.o 00:07:35.011 CC lib/vhost/vhost_rpc.o 00:07:35.011 CC lib/vhost/vhost_scsi.o 00:07:35.011 CC lib/vhost/vhost_blk.o 00:07:35.011 CC lib/iscsi/task.o 00:07:35.270 CC lib/vhost/rte_vhost_user.o 00:07:35.270 CC lib/ftl/ftl_p2l.o 00:07:35.270 CC lib/ftl/mngt/ftl_mngt.o 00:07:35.270 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:07:35.270 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:07:35.527 CC lib/ftl/mngt/ftl_mngt_startup.o 00:07:35.527 CC lib/ftl/mngt/ftl_mngt_md.o 00:07:35.527 CC lib/ftl/mngt/ftl_mngt_misc.o 00:07:35.527 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:07:35.784 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:07:35.784 CC lib/ftl/mngt/ftl_mngt_band.o 00:07:35.784 LIB libspdk_iscsi.a 00:07:35.784 LIB libspdk_nvmf.a 00:07:35.784 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:07:35.784 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:07:35.784 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:07:35.784 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:07:35.784 SO libspdk_iscsi.so.8.0 00:07:36.041 CC lib/ftl/utils/ftl_conf.o 00:07:36.041 SO libspdk_nvmf.so.19.0 00:07:36.041 CC lib/ftl/utils/ftl_md.o 00:07:36.041 CC lib/ftl/utils/ftl_mempool.o 00:07:36.041 CC lib/ftl/utils/ftl_bitmap.o 00:07:36.041 CC lib/ftl/utils/ftl_property.o 00:07:36.041 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:07:36.041 SYMLINK libspdk_iscsi.so 00:07:36.041 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:07:36.299 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:07:36.299 SYMLINK libspdk_nvmf.so 00:07:36.299 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:07:36.299 CC lib/ftl/upgrade/ftl_band_upgrade.o 
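Each LIB libspdk_<name>.a line above is followed by an SO libspdk_<name>.so.<major>.<minor> line and a SYMLINK libspdk_<name>.so line, so every component ships as a static archive plus a versioned shared object with an unversioned development link. One way to sanity-check that layout once the build finishes (illustrative only, not something this job runs; the output directory is an assumption):

  # Illustrative post-build check; build/lib as the output directory is an assumption.
  cd /home/vagrant/spdk_repo/spdk/build/lib
  ls -l libspdk_nvme.so*                      # unversioned symlink pointing at the versioned .so
  readelf -d libspdk_nvme.so.* | grep SONAME  # SONAME embedded in the shared object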
00:07:36.299 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:07:36.299 LIB libspdk_vhost.a 00:07:36.299 SO libspdk_vhost.so.8.0 00:07:36.299 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:07:36.299 CC lib/ftl/upgrade/ftl_sb_v3.o 00:07:36.299 CC lib/ftl/upgrade/ftl_sb_v5.o 00:07:36.299 CC lib/ftl/nvc/ftl_nvc_dev.o 00:07:36.556 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:07:36.556 CC lib/ftl/base/ftl_base_dev.o 00:07:36.556 CC lib/ftl/base/ftl_base_bdev.o 00:07:36.556 CC lib/ftl/ftl_trace.o 00:07:36.556 SYMLINK libspdk_vhost.so 00:07:36.814 LIB libspdk_ftl.a 00:07:37.072 SO libspdk_ftl.so.9.0 00:07:37.436 SYMLINK libspdk_ftl.so 00:07:37.716 CC module/env_dpdk/env_dpdk_rpc.o 00:07:37.974 CC module/scheduler/dynamic/scheduler_dynamic.o 00:07:37.974 CC module/sock/posix/posix.o 00:07:37.974 CC module/accel/iaa/accel_iaa.o 00:07:37.974 CC module/accel/error/accel_error.o 00:07:37.974 CC module/keyring/file/keyring.o 00:07:37.974 CC module/accel/dsa/accel_dsa.o 00:07:37.974 CC module/accel/ioat/accel_ioat.o 00:07:37.974 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:07:37.974 CC module/blob/bdev/blob_bdev.o 00:07:37.974 LIB libspdk_env_dpdk_rpc.a 00:07:37.974 SO libspdk_env_dpdk_rpc.so.6.0 00:07:37.974 SYMLINK libspdk_env_dpdk_rpc.so 00:07:37.974 CC module/keyring/file/keyring_rpc.o 00:07:37.974 CC module/accel/error/accel_error_rpc.o 00:07:37.974 CC module/accel/ioat/accel_ioat_rpc.o 00:07:38.232 LIB libspdk_scheduler_dynamic.a 00:07:38.232 LIB libspdk_scheduler_dpdk_governor.a 00:07:38.232 SO libspdk_scheduler_dynamic.so.4.0 00:07:38.232 CC module/accel/dsa/accel_dsa_rpc.o 00:07:38.232 SO libspdk_scheduler_dpdk_governor.so.4.0 00:07:38.232 CC module/accel/iaa/accel_iaa_rpc.o 00:07:38.232 SYMLINK libspdk_scheduler_dpdk_governor.so 00:07:38.232 SYMLINK libspdk_scheduler_dynamic.so 00:07:38.232 LIB libspdk_keyring_file.a 00:07:38.232 LIB libspdk_accel_error.a 00:07:38.232 LIB libspdk_blob_bdev.a 00:07:38.232 SO libspdk_keyring_file.so.1.0 00:07:38.232 SO libspdk_accel_error.so.2.0 00:07:38.232 LIB libspdk_accel_ioat.a 00:07:38.232 SO libspdk_blob_bdev.so.11.0 00:07:38.232 SO libspdk_accel_ioat.so.6.0 00:07:38.232 LIB libspdk_accel_dsa.a 00:07:38.232 SYMLINK libspdk_blob_bdev.so 00:07:38.232 SYMLINK libspdk_accel_error.so 00:07:38.232 SYMLINK libspdk_keyring_file.so 00:07:38.232 LIB libspdk_accel_iaa.a 00:07:38.490 SO libspdk_accel_dsa.so.5.0 00:07:38.490 CC module/keyring/linux/keyring.o 00:07:38.490 CC module/keyring/linux/keyring_rpc.o 00:07:38.490 SYMLINK libspdk_accel_ioat.so 00:07:38.490 SO libspdk_accel_iaa.so.3.0 00:07:38.490 CC module/scheduler/gscheduler/gscheduler.o 00:07:38.490 SYMLINK libspdk_accel_dsa.so 00:07:38.490 SYMLINK libspdk_accel_iaa.so 00:07:38.490 LIB libspdk_keyring_linux.a 00:07:38.748 SO libspdk_keyring_linux.so.1.0 00:07:38.748 LIB libspdk_scheduler_gscheduler.a 00:07:38.748 CC module/blobfs/bdev/blobfs_bdev.o 00:07:38.748 CC module/bdev/delay/vbdev_delay.o 00:07:38.748 CC module/bdev/lvol/vbdev_lvol.o 00:07:38.748 SO libspdk_scheduler_gscheduler.so.4.0 00:07:38.748 CC module/bdev/error/vbdev_error.o 00:07:38.748 CC module/bdev/gpt/gpt.o 00:07:38.748 CC module/bdev/malloc/bdev_malloc.o 00:07:38.748 SYMLINK libspdk_keyring_linux.so 00:07:38.748 CC module/bdev/malloc/bdev_malloc_rpc.o 00:07:38.748 CC module/bdev/null/bdev_null.o 00:07:38.748 SYMLINK libspdk_scheduler_gscheduler.so 00:07:38.748 CC module/bdev/gpt/vbdev_gpt.o 00:07:38.748 LIB libspdk_sock_posix.a 00:07:38.748 SO libspdk_sock_posix.so.6.0 00:07:39.006 CC module/bdev/null/bdev_null_rpc.o 00:07:39.006 CC 
module/bdev/delay/vbdev_delay_rpc.o 00:07:39.006 SYMLINK libspdk_sock_posix.so 00:07:39.006 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:07:39.006 CC module/bdev/error/vbdev_error_rpc.o 00:07:39.006 LIB libspdk_bdev_gpt.a 00:07:39.006 LIB libspdk_bdev_null.a 00:07:39.006 LIB libspdk_bdev_malloc.a 00:07:39.006 SO libspdk_bdev_gpt.so.6.0 00:07:39.006 SO libspdk_bdev_null.so.6.0 00:07:39.006 LIB libspdk_bdev_delay.a 00:07:39.264 SO libspdk_bdev_malloc.so.6.0 00:07:39.264 CC module/bdev/nvme/bdev_nvme.o 00:07:39.264 LIB libspdk_bdev_error.a 00:07:39.264 SO libspdk_bdev_delay.so.6.0 00:07:39.264 LIB libspdk_blobfs_bdev.a 00:07:39.264 SYMLINK libspdk_bdev_null.so 00:07:39.264 SYMLINK libspdk_bdev_gpt.so 00:07:39.264 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:07:39.264 SO libspdk_bdev_error.so.6.0 00:07:39.264 CC module/bdev/passthru/vbdev_passthru.o 00:07:39.264 SO libspdk_blobfs_bdev.so.6.0 00:07:39.264 SYMLINK libspdk_bdev_malloc.so 00:07:39.264 SYMLINK libspdk_bdev_delay.so 00:07:39.264 SYMLINK libspdk_bdev_error.so 00:07:39.264 CC module/bdev/raid/bdev_raid.o 00:07:39.264 SYMLINK libspdk_blobfs_bdev.so 00:07:39.522 CC module/bdev/split/vbdev_split.o 00:07:39.522 CC module/bdev/ftl/bdev_ftl.o 00:07:39.522 CC module/bdev/zone_block/vbdev_zone_block.o 00:07:39.522 CC module/bdev/aio/bdev_aio.o 00:07:39.522 CC module/bdev/iscsi/bdev_iscsi.o 00:07:39.522 CC module/bdev/virtio/bdev_virtio_scsi.o 00:07:39.522 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:07:39.522 LIB libspdk_bdev_lvol.a 00:07:39.522 SO libspdk_bdev_lvol.so.6.0 00:07:39.780 SYMLINK libspdk_bdev_lvol.so 00:07:39.780 CC module/bdev/ftl/bdev_ftl_rpc.o 00:07:39.780 CC module/bdev/aio/bdev_aio_rpc.o 00:07:39.780 CC module/bdev/split/vbdev_split_rpc.o 00:07:39.780 LIB libspdk_bdev_passthru.a 00:07:39.780 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:07:39.780 SO libspdk_bdev_passthru.so.6.0 00:07:39.780 CC module/bdev/nvme/bdev_nvme_rpc.o 00:07:39.780 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:07:39.780 SYMLINK libspdk_bdev_passthru.so 00:07:39.780 CC module/bdev/nvme/nvme_rpc.o 00:07:39.780 LIB libspdk_bdev_ftl.a 00:07:40.038 LIB libspdk_bdev_aio.a 00:07:40.038 LIB libspdk_bdev_zone_block.a 00:07:40.038 SO libspdk_bdev_ftl.so.6.0 00:07:40.038 SO libspdk_bdev_aio.so.6.0 00:07:40.038 LIB libspdk_bdev_split.a 00:07:40.038 SO libspdk_bdev_zone_block.so.6.0 00:07:40.038 SO libspdk_bdev_split.so.6.0 00:07:40.038 SYMLINK libspdk_bdev_ftl.so 00:07:40.038 LIB libspdk_bdev_iscsi.a 00:07:40.038 CC module/bdev/raid/bdev_raid_rpc.o 00:07:40.038 SYMLINK libspdk_bdev_aio.so 00:07:40.038 CC module/bdev/nvme/bdev_mdns_client.o 00:07:40.038 CC module/bdev/virtio/bdev_virtio_blk.o 00:07:40.038 SYMLINK libspdk_bdev_zone_block.so 00:07:40.038 SYMLINK libspdk_bdev_split.so 00:07:40.038 SO libspdk_bdev_iscsi.so.6.0 00:07:40.038 CC module/bdev/nvme/vbdev_opal.o 00:07:40.038 CC module/bdev/nvme/vbdev_opal_rpc.o 00:07:40.038 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:07:40.038 SYMLINK libspdk_bdev_iscsi.so 00:07:40.038 CC module/bdev/raid/bdev_raid_sb.o 00:07:40.296 CC module/bdev/raid/raid0.o 00:07:40.296 CC module/bdev/raid/raid1.o 00:07:40.296 CC module/bdev/raid/concat.o 00:07:40.296 CC module/bdev/virtio/bdev_virtio_rpc.o 00:07:40.554 LIB libspdk_bdev_virtio.a 00:07:40.554 LIB libspdk_bdev_raid.a 00:07:40.554 SO libspdk_bdev_virtio.so.6.0 00:07:40.554 SO libspdk_bdev_raid.so.6.0 00:07:40.812 SYMLINK libspdk_bdev_virtio.so 00:07:40.812 SYMLINK libspdk_bdev_raid.so 00:07:41.379 LIB libspdk_bdev_nvme.a 00:07:41.637 SO libspdk_bdev_nvme.so.7.0 00:07:41.637 
SYMLINK libspdk_bdev_nvme.so 00:07:42.229 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:07:42.230 CC module/event/subsystems/iobuf/iobuf.o 00:07:42.230 CC module/event/subsystems/sock/sock.o 00:07:42.230 CC module/event/subsystems/keyring/keyring.o 00:07:42.230 CC module/event/subsystems/scheduler/scheduler.o 00:07:42.230 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:07:42.230 CC module/event/subsystems/vmd/vmd.o 00:07:42.230 CC module/event/subsystems/vmd/vmd_rpc.o 00:07:42.488 LIB libspdk_event_scheduler.a 00:07:42.488 LIB libspdk_event_iobuf.a 00:07:42.488 LIB libspdk_event_keyring.a 00:07:42.488 LIB libspdk_event_vmd.a 00:07:42.488 SO libspdk_event_keyring.so.1.0 00:07:42.488 SO libspdk_event_scheduler.so.4.0 00:07:42.488 SO libspdk_event_iobuf.so.3.0 00:07:42.488 SO libspdk_event_vmd.so.6.0 00:07:42.488 LIB libspdk_event_sock.a 00:07:42.488 LIB libspdk_event_vhost_blk.a 00:07:42.488 SO libspdk_event_sock.so.5.0 00:07:42.488 SO libspdk_event_vhost_blk.so.3.0 00:07:42.488 SYMLINK libspdk_event_iobuf.so 00:07:42.488 SYMLINK libspdk_event_scheduler.so 00:07:42.488 SYMLINK libspdk_event_keyring.so 00:07:42.488 SYMLINK libspdk_event_vmd.so 00:07:42.488 SYMLINK libspdk_event_sock.so 00:07:42.488 SYMLINK libspdk_event_vhost_blk.so 00:07:42.746 CC module/event/subsystems/accel/accel.o 00:07:43.004 LIB libspdk_event_accel.a 00:07:43.004 SO libspdk_event_accel.so.6.0 00:07:43.004 SYMLINK libspdk_event_accel.so 00:07:43.262 CC module/event/subsystems/bdev/bdev.o 00:07:43.521 LIB libspdk_event_bdev.a 00:07:43.521 SO libspdk_event_bdev.so.6.0 00:07:43.779 SYMLINK libspdk_event_bdev.so 00:07:43.779 CC module/event/subsystems/ublk/ublk.o 00:07:43.779 CC module/event/subsystems/nbd/nbd.o 00:07:43.779 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:07:43.779 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:07:43.779 CC module/event/subsystems/scsi/scsi.o 00:07:44.037 LIB libspdk_event_nbd.a 00:07:44.037 LIB libspdk_event_ublk.a 00:07:44.037 SO libspdk_event_nbd.so.6.0 00:07:44.037 SO libspdk_event_ublk.so.3.0 00:07:44.037 LIB libspdk_event_scsi.a 00:07:44.037 SO libspdk_event_scsi.so.6.0 00:07:44.295 SYMLINK libspdk_event_nbd.so 00:07:44.295 SYMLINK libspdk_event_ublk.so 00:07:44.295 LIB libspdk_event_nvmf.a 00:07:44.295 SYMLINK libspdk_event_scsi.so 00:07:44.295 SO libspdk_event_nvmf.so.6.0 00:07:44.295 SYMLINK libspdk_event_nvmf.so 00:07:44.554 CC module/event/subsystems/iscsi/iscsi.o 00:07:44.554 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:07:44.554 LIB libspdk_event_vhost_scsi.a 00:07:44.554 LIB libspdk_event_iscsi.a 00:07:44.554 SO libspdk_event_vhost_scsi.so.3.0 00:07:44.812 SO libspdk_event_iscsi.so.6.0 00:07:44.812 SYMLINK libspdk_event_vhost_scsi.so 00:07:44.812 SYMLINK libspdk_event_iscsi.so 00:07:44.812 SO libspdk.so.6.0 00:07:44.812 SYMLINK libspdk.so 00:07:45.079 CC app/trace_record/trace_record.o 00:07:45.079 CXX app/trace/trace.o 00:07:45.079 TEST_HEADER include/spdk/accel.h 00:07:45.079 TEST_HEADER include/spdk/accel_module.h 00:07:45.079 TEST_HEADER include/spdk/assert.h 00:07:45.079 TEST_HEADER include/spdk/barrier.h 00:07:45.079 TEST_HEADER include/spdk/base64.h 00:07:45.079 TEST_HEADER include/spdk/bdev.h 00:07:45.079 TEST_HEADER include/spdk/bdev_module.h 00:07:45.079 TEST_HEADER include/spdk/bdev_zone.h 00:07:45.079 TEST_HEADER include/spdk/bit_array.h 00:07:45.351 CC examples/interrupt_tgt/interrupt_tgt.o 00:07:45.351 TEST_HEADER include/spdk/bit_pool.h 00:07:45.351 TEST_HEADER include/spdk/blob_bdev.h 00:07:45.351 TEST_HEADER include/spdk/blobfs_bdev.h 00:07:45.351 
TEST_HEADER include/spdk/blobfs.h 00:07:45.351 TEST_HEADER include/spdk/blob.h 00:07:45.351 TEST_HEADER include/spdk/conf.h 00:07:45.351 TEST_HEADER include/spdk/config.h 00:07:45.351 TEST_HEADER include/spdk/cpuset.h 00:07:45.351 TEST_HEADER include/spdk/crc16.h 00:07:45.351 TEST_HEADER include/spdk/crc32.h 00:07:45.351 TEST_HEADER include/spdk/crc64.h 00:07:45.351 TEST_HEADER include/spdk/dif.h 00:07:45.351 TEST_HEADER include/spdk/dma.h 00:07:45.351 TEST_HEADER include/spdk/endian.h 00:07:45.351 TEST_HEADER include/spdk/env_dpdk.h 00:07:45.351 TEST_HEADER include/spdk/env.h 00:07:45.351 TEST_HEADER include/spdk/event.h 00:07:45.351 TEST_HEADER include/spdk/fd_group.h 00:07:45.351 TEST_HEADER include/spdk/fd.h 00:07:45.352 TEST_HEADER include/spdk/file.h 00:07:45.352 CC examples/util/zipf/zipf.o 00:07:45.352 TEST_HEADER include/spdk/ftl.h 00:07:45.352 TEST_HEADER include/spdk/gpt_spec.h 00:07:45.352 TEST_HEADER include/spdk/hexlify.h 00:07:45.352 CC examples/ioat/perf/perf.o 00:07:45.352 TEST_HEADER include/spdk/histogram_data.h 00:07:45.352 TEST_HEADER include/spdk/idxd.h 00:07:45.352 TEST_HEADER include/spdk/idxd_spec.h 00:07:45.352 TEST_HEADER include/spdk/init.h 00:07:45.352 TEST_HEADER include/spdk/ioat.h 00:07:45.352 TEST_HEADER include/spdk/ioat_spec.h 00:07:45.352 TEST_HEADER include/spdk/iscsi_spec.h 00:07:45.352 TEST_HEADER include/spdk/json.h 00:07:45.352 CC test/thread/poller_perf/poller_perf.o 00:07:45.352 TEST_HEADER include/spdk/jsonrpc.h 00:07:45.352 TEST_HEADER include/spdk/keyring.h 00:07:45.352 TEST_HEADER include/spdk/keyring_module.h 00:07:45.352 TEST_HEADER include/spdk/likely.h 00:07:45.352 TEST_HEADER include/spdk/log.h 00:07:45.352 TEST_HEADER include/spdk/lvol.h 00:07:45.352 TEST_HEADER include/spdk/memory.h 00:07:45.352 TEST_HEADER include/spdk/mmio.h 00:07:45.352 TEST_HEADER include/spdk/nbd.h 00:07:45.352 TEST_HEADER include/spdk/net.h 00:07:45.352 TEST_HEADER include/spdk/notify.h 00:07:45.352 TEST_HEADER include/spdk/nvme.h 00:07:45.352 TEST_HEADER include/spdk/nvme_intel.h 00:07:45.352 TEST_HEADER include/spdk/nvme_ocssd.h 00:07:45.352 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:07:45.352 CC test/dma/test_dma/test_dma.o 00:07:45.352 TEST_HEADER include/spdk/nvme_spec.h 00:07:45.352 TEST_HEADER include/spdk/nvme_zns.h 00:07:45.352 TEST_HEADER include/spdk/nvmf_cmd.h 00:07:45.352 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:07:45.352 CC test/app/bdev_svc/bdev_svc.o 00:07:45.352 TEST_HEADER include/spdk/nvmf.h 00:07:45.352 TEST_HEADER include/spdk/nvmf_spec.h 00:07:45.352 TEST_HEADER include/spdk/nvmf_transport.h 00:07:45.352 TEST_HEADER include/spdk/opal.h 00:07:45.352 TEST_HEADER include/spdk/opal_spec.h 00:07:45.352 TEST_HEADER include/spdk/pci_ids.h 00:07:45.352 TEST_HEADER include/spdk/pipe.h 00:07:45.352 TEST_HEADER include/spdk/queue.h 00:07:45.352 TEST_HEADER include/spdk/reduce.h 00:07:45.352 TEST_HEADER include/spdk/rpc.h 00:07:45.352 TEST_HEADER include/spdk/scheduler.h 00:07:45.352 TEST_HEADER include/spdk/scsi.h 00:07:45.352 TEST_HEADER include/spdk/scsi_spec.h 00:07:45.352 TEST_HEADER include/spdk/sock.h 00:07:45.352 TEST_HEADER include/spdk/stdinc.h 00:07:45.352 TEST_HEADER include/spdk/string.h 00:07:45.352 TEST_HEADER include/spdk/thread.h 00:07:45.352 TEST_HEADER include/spdk/trace.h 00:07:45.352 TEST_HEADER include/spdk/trace_parser.h 00:07:45.352 TEST_HEADER include/spdk/tree.h 00:07:45.352 TEST_HEADER include/spdk/ublk.h 00:07:45.352 TEST_HEADER include/spdk/util.h 00:07:45.352 TEST_HEADER include/spdk/uuid.h 00:07:45.352 TEST_HEADER 
include/spdk/version.h 00:07:45.352 TEST_HEADER include/spdk/vfio_user_pci.h 00:07:45.352 TEST_HEADER include/spdk/vfio_user_spec.h 00:07:45.352 TEST_HEADER include/spdk/vhost.h 00:07:45.352 TEST_HEADER include/spdk/vmd.h 00:07:45.352 TEST_HEADER include/spdk/xor.h 00:07:45.352 TEST_HEADER include/spdk/zipf.h 00:07:45.352 CXX test/cpp_headers/accel.o 00:07:45.610 LINK interrupt_tgt 00:07:45.610 LINK zipf 00:07:45.610 LINK spdk_trace_record 00:07:45.610 LINK poller_perf 00:07:45.610 LINK bdev_svc 00:07:45.610 LINK ioat_perf 00:07:45.610 CXX test/cpp_headers/accel_module.o 00:07:45.869 LINK spdk_trace 00:07:45.869 LINK test_dma 00:07:45.869 CC test/rpc_client/rpc_client_test.o 00:07:45.869 CXX test/cpp_headers/assert.o 00:07:46.127 CC examples/ioat/verify/verify.o 00:07:46.127 CC test/event/event_perf/event_perf.o 00:07:46.127 CC examples/thread/thread/thread_ex.o 00:07:46.127 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:07:46.127 CC test/env/mem_callbacks/mem_callbacks.o 00:07:46.127 CXX test/cpp_headers/barrier.o 00:07:46.127 LINK rpc_client_test 00:07:46.127 LINK event_perf 00:07:46.385 LINK verify 00:07:46.385 CC app/nvmf_tgt/nvmf_main.o 00:07:46.385 CXX test/cpp_headers/base64.o 00:07:46.385 CC test/env/vtophys/vtophys.o 00:07:46.385 LINK thread 00:07:46.642 LINK nvmf_tgt 00:07:46.642 CC test/event/reactor/reactor.o 00:07:46.643 LINK nvme_fuzz 00:07:46.643 LINK vtophys 00:07:46.643 CC test/event/reactor_perf/reactor_perf.o 00:07:46.643 CXX test/cpp_headers/bdev.o 00:07:46.901 LINK reactor 00:07:46.901 CC app/iscsi_tgt/iscsi_tgt.o 00:07:46.901 LINK reactor_perf 00:07:47.159 CC app/spdk_tgt/spdk_tgt.o 00:07:47.159 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:07:47.159 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:07:47.159 CC app/spdk_lspci/spdk_lspci.o 00:07:47.159 CXX test/cpp_headers/bdev_module.o 00:07:47.159 CC app/spdk_nvme_perf/perf.o 00:07:47.159 LINK mem_callbacks 00:07:47.159 LINK iscsi_tgt 00:07:47.159 LINK spdk_lspci 00:07:47.415 LINK env_dpdk_post_init 00:07:47.415 CXX test/cpp_headers/bdev_zone.o 00:07:47.415 CC test/event/app_repeat/app_repeat.o 00:07:47.415 LINK spdk_tgt 00:07:47.673 CXX test/cpp_headers/bit_array.o 00:07:47.673 CC test/event/scheduler/scheduler.o 00:07:47.673 CC test/env/memory/memory_ut.o 00:07:47.673 LINK app_repeat 00:07:47.673 CC test/env/pci/pci_ut.o 00:07:47.673 CXX test/cpp_headers/bit_pool.o 00:07:47.673 CC app/spdk_nvme_identify/identify.o 00:07:47.931 CC app/spdk_nvme_discover/discovery_aer.o 00:07:47.931 CXX test/cpp_headers/blob_bdev.o 00:07:47.931 LINK scheduler 00:07:48.190 CC app/spdk_top/spdk_top.o 00:07:48.190 LINK spdk_nvme_perf 00:07:48.190 CXX test/cpp_headers/blobfs_bdev.o 00:07:48.448 LINK spdk_nvme_discover 00:07:48.448 LINK pci_ut 00:07:48.448 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:07:48.706 CC test/app/histogram_perf/histogram_perf.o 00:07:48.706 CXX test/cpp_headers/blobfs.o 00:07:48.706 CC test/app/jsoncat/jsoncat.o 00:07:48.706 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:07:48.963 LINK histogram_perf 00:07:48.963 CXX test/cpp_headers/blob.o 00:07:48.963 LINK iscsi_fuzz 00:07:48.963 LINK spdk_nvme_identify 00:07:48.963 LINK jsoncat 00:07:48.963 LINK memory_ut 00:07:48.963 CC test/app/stub/stub.o 00:07:49.220 CXX test/cpp_headers/conf.o 00:07:49.478 LINK stub 00:07:49.478 CXX test/cpp_headers/config.o 00:07:49.478 CC app/vhost/vhost.o 00:07:49.478 CXX test/cpp_headers/cpuset.o 00:07:49.478 CC app/spdk_dd/spdk_dd.o 00:07:49.478 LINK spdk_top 00:07:49.478 LINK vhost_fuzz 00:07:49.478 CC test/accel/dif/dif.o 
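The TEST_HEADER include/spdk/<name>.h lines followed by CXX test/cpp_headers/<name>.o objects suggest that every public header is also compiled standalone as C++, so a header that cannot be consumed from C++ fails the build here rather than in a consumer. A rough hand-rolled equivalent for a single header is sketched below; the real check is generated by the build system and may differ in detail:

  # Rough equivalent of compiling one public header on its own as C++ (illustrative).
  cd /home/vagrant/spdk_repo/spdk
  echo '#include "spdk/accel.h"' | g++ -std=c++11 -x c++ -fsyntax-only -I include -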
00:07:49.736 CC app/fio/nvme/fio_plugin.o 00:07:49.736 CC test/blobfs/mkfs/mkfs.o 00:07:49.736 LINK vhost 00:07:49.736 CXX test/cpp_headers/crc16.o 00:07:49.993 CC app/fio/bdev/fio_plugin.o 00:07:49.994 LINK spdk_dd 00:07:49.994 CXX test/cpp_headers/crc32.o 00:07:49.994 CC test/nvme/aer/aer.o 00:07:49.994 LINK mkfs 00:07:49.994 CC test/lvol/esnap/esnap.o 00:07:50.252 CC test/nvme/reset/reset.o 00:07:50.252 CXX test/cpp_headers/crc64.o 00:07:50.252 LINK dif 00:07:50.509 CC test/nvme/sgl/sgl.o 00:07:50.509 LINK aer 00:07:50.509 CC test/nvme/e2edp/nvme_dp.o 00:07:50.767 LINK spdk_nvme 00:07:50.767 CXX test/cpp_headers/dif.o 00:07:50.767 LINK spdk_bdev 00:07:50.767 CXX test/cpp_headers/dma.o 00:07:50.767 LINK reset 00:07:51.025 LINK sgl 00:07:51.025 CC test/nvme/overhead/overhead.o 00:07:51.025 LINK nvme_dp 00:07:51.025 CXX test/cpp_headers/endian.o 00:07:51.025 CC test/nvme/err_injection/err_injection.o 00:07:51.283 CC examples/sock/hello_world/hello_sock.o 00:07:51.283 CC test/bdev/bdevio/bdevio.o 00:07:51.283 CC test/nvme/startup/startup.o 00:07:51.541 LINK err_injection 00:07:51.541 CXX test/cpp_headers/env_dpdk.o 00:07:51.541 CC examples/vmd/lsvmd/lsvmd.o 00:07:51.541 LINK overhead 00:07:51.541 LINK hello_sock 00:07:51.541 LINK startup 00:07:51.541 CC examples/idxd/perf/perf.o 00:07:51.541 CXX test/cpp_headers/env.o 00:07:51.799 LINK lsvmd 00:07:51.799 LINK bdevio 00:07:51.799 CC test/nvme/reserve/reserve.o 00:07:51.799 CXX test/cpp_headers/event.o 00:07:52.057 LINK idxd_perf 00:07:52.057 CC examples/accel/perf/accel_perf.o 00:07:52.057 CC examples/vmd/led/led.o 00:07:52.057 CXX test/cpp_headers/fd_group.o 00:07:52.057 CC examples/blob/hello_world/hello_blob.o 00:07:52.057 CC examples/nvme/hello_world/hello_world.o 00:07:52.057 LINK reserve 00:07:52.315 LINK led 00:07:52.315 CC examples/nvme/reconnect/reconnect.o 00:07:52.315 CXX test/cpp_headers/fd.o 00:07:52.315 CC examples/blob/cli/blobcli.o 00:07:52.315 LINK hello_world 00:07:52.315 LINK hello_blob 00:07:52.573 CXX test/cpp_headers/file.o 00:07:52.573 CC test/nvme/simple_copy/simple_copy.o 00:07:52.573 CC test/nvme/connect_stress/connect_stress.o 00:07:52.573 LINK accel_perf 00:07:52.573 LINK reconnect 00:07:52.573 CXX test/cpp_headers/ftl.o 00:07:52.573 CC examples/nvme/nvme_manage/nvme_manage.o 00:07:52.831 LINK connect_stress 00:07:52.831 LINK simple_copy 00:07:52.831 CC examples/nvme/arbitration/arbitration.o 00:07:52.831 CXX test/cpp_headers/gpt_spec.o 00:07:52.831 CC examples/nvme/hotplug/hotplug.o 00:07:53.089 LINK blobcli 00:07:53.089 CC test/nvme/boot_partition/boot_partition.o 00:07:53.089 CC examples/bdev/hello_world/hello_bdev.o 00:07:53.089 CC examples/bdev/bdevperf/bdevperf.o 00:07:53.089 CXX test/cpp_headers/hexlify.o 00:07:53.089 LINK arbitration 00:07:53.089 LINK hotplug 00:07:53.089 LINK boot_partition 00:07:53.347 CXX test/cpp_headers/histogram_data.o 00:07:53.347 LINK hello_bdev 00:07:53.347 LINK nvme_manage 00:07:53.347 CXX test/cpp_headers/idxd.o 00:07:53.347 CXX test/cpp_headers/idxd_spec.o 00:07:53.347 CC test/nvme/compliance/nvme_compliance.o 00:07:53.605 CXX test/cpp_headers/init.o 00:07:53.605 CXX test/cpp_headers/ioat.o 00:07:53.605 CXX test/cpp_headers/ioat_spec.o 00:07:53.605 CC examples/nvme/cmb_copy/cmb_copy.o 00:07:53.605 CC examples/nvme/abort/abort.o 00:07:53.605 CXX test/cpp_headers/iscsi_spec.o 00:07:53.863 CXX test/cpp_headers/json.o 00:07:53.863 CC test/nvme/fused_ordering/fused_ordering.o 00:07:53.863 LINK nvme_compliance 00:07:53.863 LINK cmb_copy 00:07:53.863 LINK bdevperf 00:07:53.863 CC 
examples/nvme/pmr_persistence/pmr_persistence.o 00:07:53.863 CXX test/cpp_headers/jsonrpc.o 00:07:53.863 CC test/nvme/doorbell_aers/doorbell_aers.o 00:07:54.121 LINK abort 00:07:54.121 CXX test/cpp_headers/keyring.o 00:07:54.121 LINK pmr_persistence 00:07:54.121 CC test/nvme/fdp/fdp.o 00:07:54.121 CXX test/cpp_headers/keyring_module.o 00:07:54.121 LINK fused_ordering 00:07:54.121 CC test/nvme/cuse/cuse.o 00:07:54.121 CXX test/cpp_headers/likely.o 00:07:54.121 CXX test/cpp_headers/log.o 00:07:54.379 LINK doorbell_aers 00:07:54.379 CXX test/cpp_headers/lvol.o 00:07:54.379 CXX test/cpp_headers/memory.o 00:07:54.379 CXX test/cpp_headers/mmio.o 00:07:54.379 CXX test/cpp_headers/nbd.o 00:07:54.379 CXX test/cpp_headers/net.o 00:07:54.379 CXX test/cpp_headers/notify.o 00:07:54.379 CXX test/cpp_headers/nvme.o 00:07:54.379 CXX test/cpp_headers/nvme_intel.o 00:07:54.379 CXX test/cpp_headers/nvme_ocssd.o 00:07:54.379 LINK fdp 00:07:54.379 CC examples/nvmf/nvmf/nvmf.o 00:07:54.379 CXX test/cpp_headers/nvme_ocssd_spec.o 00:07:54.637 CXX test/cpp_headers/nvme_spec.o 00:07:54.637 CXX test/cpp_headers/nvme_zns.o 00:07:54.637 CXX test/cpp_headers/nvmf_cmd.o 00:07:54.637 CXX test/cpp_headers/nvmf_fc_spec.o 00:07:54.637 CXX test/cpp_headers/nvmf.o 00:07:54.637 CXX test/cpp_headers/nvmf_spec.o 00:07:54.637 CXX test/cpp_headers/nvmf_transport.o 00:07:54.895 CXX test/cpp_headers/opal.o 00:07:54.895 LINK nvmf 00:07:54.895 CXX test/cpp_headers/opal_spec.o 00:07:54.895 CXX test/cpp_headers/pci_ids.o 00:07:54.895 CXX test/cpp_headers/pipe.o 00:07:54.895 CXX test/cpp_headers/queue.o 00:07:54.895 CXX test/cpp_headers/reduce.o 00:07:54.895 CXX test/cpp_headers/rpc.o 00:07:54.895 CXX test/cpp_headers/scheduler.o 00:07:54.895 CXX test/cpp_headers/scsi.o 00:07:54.895 CXX test/cpp_headers/scsi_spec.o 00:07:55.154 CXX test/cpp_headers/sock.o 00:07:55.154 CXX test/cpp_headers/stdinc.o 00:07:55.154 CXX test/cpp_headers/string.o 00:07:55.154 CXX test/cpp_headers/thread.o 00:07:55.154 CXX test/cpp_headers/trace.o 00:07:55.154 CXX test/cpp_headers/trace_parser.o 00:07:55.154 CXX test/cpp_headers/tree.o 00:07:55.154 CXX test/cpp_headers/ublk.o 00:07:55.154 CXX test/cpp_headers/util.o 00:07:55.154 CXX test/cpp_headers/uuid.o 00:07:55.154 CXX test/cpp_headers/version.o 00:07:55.154 CXX test/cpp_headers/vfio_user_pci.o 00:07:55.154 CXX test/cpp_headers/vfio_user_spec.o 00:07:55.154 CXX test/cpp_headers/vhost.o 00:07:55.413 CXX test/cpp_headers/vmd.o 00:07:55.413 CXX test/cpp_headers/xor.o 00:07:55.413 CXX test/cpp_headers/zipf.o 00:07:55.413 LINK cuse 00:07:55.979 LINK esnap 00:07:56.545 00:07:56.545 real 1m9.570s 00:07:56.545 user 7m16.873s 00:07:56.545 sys 1m43.811s 00:07:56.545 17:08:42 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:07:56.545 17:08:42 make -- common/autotest_common.sh@10 -- $ set +x 00:07:56.545 ************************************ 00:07:56.545 END TEST make 00:07:56.545 ************************************ 00:07:56.545 17:08:42 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:07:56.545 17:08:42 -- pm/common@29 -- $ signal_monitor_resources TERM 00:07:56.545 17:08:42 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:07:56.545 17:08:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:56.545 17:08:42 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:07:56.545 17:08:42 -- pm/common@44 -- $ pid=5308 00:07:56.545 17:08:42 -- pm/common@50 -- $ kill -TERM 5308 00:07:56.545 17:08:42 -- pm/common@42 -- $ for monitor in 
"${MONITOR_RESOURCES[@]}" 00:07:56.545 17:08:42 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:07:56.545 17:08:42 -- pm/common@44 -- $ pid=5310 00:07:56.545 17:08:42 -- pm/common@50 -- $ kill -TERM 5310 00:07:56.545 17:08:42 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:56.545 17:08:42 -- nvmf/common.sh@7 -- # uname -s 00:07:56.545 17:08:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:56.545 17:08:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:56.545 17:08:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:56.545 17:08:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:56.545 17:08:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:56.545 17:08:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:56.545 17:08:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:56.545 17:08:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:56.545 17:08:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:56.545 17:08:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:56.545 17:08:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:07:56.545 17:08:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:07:56.545 17:08:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:56.545 17:08:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:56.545 17:08:42 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:56.545 17:08:42 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:56.545 17:08:42 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:56.545 17:08:42 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:56.545 17:08:42 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:56.545 17:08:42 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:56.545 17:08:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.545 17:08:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.545 17:08:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.545 17:08:42 -- paths/export.sh@5 -- # export PATH 00:07:56.545 17:08:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.545 17:08:42 -- nvmf/common.sh@47 -- # : 0 00:07:56.545 17:08:42 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:56.545 17:08:42 -- nvmf/common.sh@49 -- # 
build_nvmf_app_args 00:07:56.545 17:08:42 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:56.545 17:08:42 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:56.545 17:08:42 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:56.545 17:08:42 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:56.545 17:08:42 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:56.545 17:08:42 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:56.545 17:08:42 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:07:56.546 17:08:42 -- spdk/autotest.sh@32 -- # uname -s 00:07:56.546 17:08:42 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:07:56.546 17:08:42 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:07:56.546 17:08:42 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:07:56.546 17:08:42 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:07:56.546 17:08:42 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:07:56.546 17:08:42 -- spdk/autotest.sh@44 -- # modprobe nbd 00:07:56.804 17:08:42 -- spdk/autotest.sh@46 -- # type -P udevadm 00:07:56.804 17:08:42 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:07:56.804 17:08:42 -- spdk/autotest.sh@48 -- # udevadm_pid=54705 00:07:56.804 17:08:42 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:07:56.804 17:08:42 -- pm/common@17 -- # local monitor 00:07:56.804 17:08:42 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:07:56.804 17:08:42 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:07:56.804 17:08:42 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:07:56.804 17:08:42 -- pm/common@25 -- # sleep 1 00:07:56.804 17:08:42 -- pm/common@21 -- # date +%s 00:07:56.804 17:08:42 -- pm/common@21 -- # date +%s 00:07:56.804 17:08:42 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721840922 00:07:56.804 17:08:42 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721840922 00:07:56.804 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721840922_collect-vmstat.pm.log 00:07:56.804 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721840922_collect-cpu-load.pm.log 00:07:57.737 17:08:43 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:07:57.737 17:08:43 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:07:57.737 17:08:43 -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:57.737 17:08:43 -- common/autotest_common.sh@10 -- # set +x 00:07:57.737 17:08:43 -- spdk/autotest.sh@59 -- # create_test_list 00:07:57.737 17:08:43 -- common/autotest_common.sh@748 -- # xtrace_disable 00:07:57.737 17:08:43 -- common/autotest_common.sh@10 -- # set +x 00:07:57.737 17:08:43 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:07:57.737 17:08:43 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:07:57.737 17:08:43 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:07:57.737 17:08:43 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:07:57.737 17:08:43 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:07:57.737 17:08:43 -- 
spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:07:57.737 17:08:43 -- common/autotest_common.sh@1455 -- # uname 00:07:57.737 17:08:43 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:07:57.737 17:08:43 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:07:57.737 17:08:43 -- common/autotest_common.sh@1475 -- # uname 00:07:57.737 17:08:43 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:07:57.737 17:08:43 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:07:57.737 17:08:43 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:07:57.737 17:08:43 -- spdk/autotest.sh@72 -- # hash lcov 00:07:57.737 17:08:43 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:07:57.737 17:08:43 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:07:57.737 --rc lcov_branch_coverage=1 00:07:57.737 --rc lcov_function_coverage=1 00:07:57.737 --rc genhtml_branch_coverage=1 00:07:57.737 --rc genhtml_function_coverage=1 00:07:57.737 --rc genhtml_legend=1 00:07:57.737 --rc geninfo_all_blocks=1 00:07:57.737 ' 00:07:57.737 17:08:43 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:07:57.737 --rc lcov_branch_coverage=1 00:07:57.737 --rc lcov_function_coverage=1 00:07:57.737 --rc genhtml_branch_coverage=1 00:07:57.737 --rc genhtml_function_coverage=1 00:07:57.737 --rc genhtml_legend=1 00:07:57.737 --rc geninfo_all_blocks=1 00:07:57.737 ' 00:07:57.737 17:08:43 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:07:57.737 --rc lcov_branch_coverage=1 00:07:57.737 --rc lcov_function_coverage=1 00:07:57.737 --rc genhtml_branch_coverage=1 00:07:57.737 --rc genhtml_function_coverage=1 00:07:57.737 --rc genhtml_legend=1 00:07:57.737 --rc geninfo_all_blocks=1 00:07:57.737 --no-external' 00:07:57.737 17:08:43 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:07:57.737 --rc lcov_branch_coverage=1 00:07:57.737 --rc lcov_function_coverage=1 00:07:57.737 --rc genhtml_branch_coverage=1 00:07:57.737 --rc genhtml_function_coverage=1 00:07:57.737 --rc genhtml_legend=1 00:07:57.737 --rc geninfo_all_blocks=1 00:07:57.737 --no-external' 00:07:57.737 17:08:43 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:07:57.995 lcov: LCOV version 1.14 00:07:57.995 17:08:43 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:08:16.072 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:08:16.072 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:08:28.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:08:28.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:08:28.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:08:28.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:08:28.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:08:28.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:08:28.267 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:08:28.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:08:28.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:08:28.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:08:28.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:08:28.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:08:28.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:08:28.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:08:28.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:08:28.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:08:28.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:08:28.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:08:28.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:08:28.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:08:28.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:08:28.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:08:28.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:08:28.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:08:28.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:08:28.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:08:28.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:08:28.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:08:28.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:08:28.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:08:28.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:08:28.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:08:28.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:08:28.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:08:28.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:08:28.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:08:28.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:08:28.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:08:28.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:08:28.267 geninfo: WARNING: GCOV did 
not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:08:28.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:08:28.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:08:28.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:08:28.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:08:28.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:08:28.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:08:28.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:08:28.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:08:28.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:08:28.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:08:28.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:08:28.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:08:28.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:08:28.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:08:28.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:08:28.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:08:28.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:08:28.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:08:28.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:08:28.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:08:28.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:08:28.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:08:28.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:08:28.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:08:28.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:08:28.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:08:28.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:08:28.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:08:28.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:08:28.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:08:28.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:08:28.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:08:28.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no 
functions found 00:08:28.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:08:28.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:08:28.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:08:28.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:08:28.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:08:28.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:08:28.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:08:28.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:08:28.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:08:28.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:08:28.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:08:28.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:08:28.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:08:28.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:08:28.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:08:28.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:08:28.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:08:28.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:08:28.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:08:28.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:08:28.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:08:28.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:08:28.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:08:28.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:08:28.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:08:28.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:08:28.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:08:28.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno:no functions found 00:08:28.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/net.gcno 00:08:28.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:08:28.267 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:08:28.267 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:08:28.268 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 
00:08:28.268 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:08:28.268 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:08:28.268 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:08:28.268 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:08:28.268 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:08:28.268 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:08:28.268 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:08:28.268 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:08:28.268 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:08:28.268 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:08:28.268 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:08:28.268 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:08:28.268 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:08:28.268 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:08:28.268 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:08:28.268 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:08:28.268 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:08:28.268 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:08:28.268 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:08:28.268 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:08:28.268 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:08:28.268 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:08:28.268 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:08:28.268 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:08:28.268 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:08:28.268 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:08:28.268 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:08:28.268 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:08:28.268 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:08:28.268 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:08:28.268 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:08:28.268 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:08:28.268 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no 
functions found 00:08:28.268 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:08:28.268 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:08:28.268 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:08:28.268 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:08:28.268 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:08:28.268 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:08:28.268 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:08:28.268 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:08:28.268 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:08:28.268 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:08:28.268 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:08:28.268 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:08:28.268 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:08:28.268 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:08:28.268 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:08:28.268 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:08:28.268 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:08:28.526 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:08:28.526 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:08:28.526 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:08:28.526 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:08:28.526 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:08:28.526 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:08:28.527 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:08:28.527 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:08:28.527 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:08:28.527 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:08:28.527 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:08:28.527 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:08:28.527 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:08:28.527 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:08:28.527 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:08:28.527 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:08:28.527 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:08:28.527 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:08:28.527 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:08:28.527 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:08:28.527 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:08:28.527 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:08:31.809 17:09:18 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:08:31.809 17:09:18 -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:31.809 17:09:18 -- common/autotest_common.sh@10 -- # set +x 00:08:32.067 17:09:18 -- spdk/autotest.sh@91 -- # rm -f 00:08:32.067 17:09:18 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:32.633 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:32.633 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:08:32.633 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:08:32.633 17:09:18 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:08:32.633 17:09:18 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:08:32.633 17:09:18 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:08:32.633 17:09:18 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:08:32.633 17:09:18 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:08:32.633 17:09:18 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:08:32.633 17:09:18 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:08:32.633 17:09:18 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:08:32.633 17:09:18 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:08:32.633 17:09:18 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:08:32.633 17:09:18 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:08:32.633 17:09:18 -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:08:32.633 17:09:18 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:08:32.633 17:09:18 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:08:32.633 17:09:18 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:08:32.633 17:09:18 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:08:32.633 17:09:18 -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:08:32.633 17:09:18 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:08:32.633 17:09:18 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:08:32.633 17:09:18 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:08:32.633 17:09:18 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:08:32.633 17:09:18 -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:08:32.633 17:09:18 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:08:32.633 17:09:18 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:08:32.633 17:09:18 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:08:32.633 17:09:18 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:08:32.633 17:09:18 -- spdk/autotest.sh@112 -- # 
[[ -z '' ]] 00:08:32.633 17:09:18 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:08:32.633 17:09:18 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:08:32.633 17:09:18 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:08:32.633 No valid GPT data, bailing 00:08:32.633 17:09:18 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:08:32.633 17:09:18 -- scripts/common.sh@391 -- # pt= 00:08:32.633 17:09:18 -- scripts/common.sh@392 -- # return 1 00:08:32.633 17:09:18 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:08:32.633 1+0 records in 00:08:32.633 1+0 records out 00:08:32.633 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00456768 s, 230 MB/s 00:08:32.633 17:09:18 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:08:32.633 17:09:18 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:08:32.633 17:09:18 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:08:32.633 17:09:18 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:08:32.633 17:09:18 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:08:32.892 No valid GPT data, bailing 00:08:32.892 17:09:18 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:08:32.892 17:09:18 -- scripts/common.sh@391 -- # pt= 00:08:32.892 17:09:18 -- scripts/common.sh@392 -- # return 1 00:08:32.892 17:09:18 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:08:32.892 1+0 records in 00:08:32.892 1+0 records out 00:08:32.892 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0032559 s, 322 MB/s 00:08:32.892 17:09:18 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:08:32.892 17:09:18 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:08:32.892 17:09:18 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:08:32.892 17:09:18 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:08:32.892 17:09:18 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:08:32.892 No valid GPT data, bailing 00:08:32.892 17:09:18 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:08:32.892 17:09:18 -- scripts/common.sh@391 -- # pt= 00:08:32.892 17:09:18 -- scripts/common.sh@392 -- # return 1 00:08:32.892 17:09:18 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:08:32.892 1+0 records in 00:08:32.892 1+0 records out 00:08:32.892 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0033996 s, 308 MB/s 00:08:32.892 17:09:18 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:08:32.892 17:09:19 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:08:32.892 17:09:19 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:08:32.892 17:09:19 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:08:32.892 17:09:19 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:08:32.892 No valid GPT data, bailing 00:08:32.892 17:09:19 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:08:32.892 17:09:19 -- scripts/common.sh@391 -- # pt= 00:08:32.892 17:09:19 -- scripts/common.sh@392 -- # return 1 00:08:32.892 17:09:19 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:08:32.892 1+0 records in 00:08:32.892 1+0 records out 00:08:32.892 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00498116 s, 211 MB/s 00:08:32.892 17:09:19 -- spdk/autotest.sh@118 -- # sync 00:08:32.892 17:09:19 -- spdk/autotest.sh@120 -- # 
xtrace_disable_per_cmd reap_spdk_processes 00:08:32.892 17:09:19 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:08:32.892 17:09:19 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:08:34.792 17:09:20 -- spdk/autotest.sh@124 -- # uname -s 00:08:34.792 17:09:20 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:08:34.792 17:09:20 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:08:34.792 17:09:20 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:34.792 17:09:20 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:34.792 17:09:20 -- common/autotest_common.sh@10 -- # set +x 00:08:34.792 ************************************ 00:08:34.792 START TEST setup.sh 00:08:34.792 ************************************ 00:08:34.792 17:09:20 setup.sh -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:08:34.792 * Looking for test storage... 00:08:34.792 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:08:34.792 17:09:20 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:08:34.792 17:09:20 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:08:34.792 17:09:20 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:08:34.792 17:09:20 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:34.792 17:09:20 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:34.792 17:09:20 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:08:34.792 ************************************ 00:08:34.792 START TEST acl 00:08:34.792 ************************************ 00:08:34.792 17:09:20 setup.sh.acl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:08:35.050 * Looking for test storage... 
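The acl run that follows begins with the same zoned-namespace scan traced earlier in autotest.sh: each /sys/block/nvme* entry is checked for a queue/zoned attribute, and anything not reporting "none" would be set aside. A minimal stand-alone sketch of that check (the variable names and the plain indexed array are illustrative, not the harness's own get_zoned_devs code):

    zoned_devs=()
    for nvme in /sys/block/nvme*; do
        # "none" means a conventional, non-zoned namespace; anything else is zoned
        if [[ -e "$nvme/queue/zoned" && "$(cat "$nvme/queue/zoned")" != none ]]; then
            zoned_devs+=("/dev/${nvme##*/}")
        fi
    done

In this run every namespace reports "none", which is why each comparison traces as [[ none != none ]] and the zoned list stays empty.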
00:08:35.050 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:08:35.050 17:09:21 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:08:35.050 17:09:21 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:08:35.050 17:09:21 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:08:35.050 17:09:21 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:08:35.050 17:09:21 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:08:35.050 17:09:21 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:08:35.050 17:09:21 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:08:35.050 17:09:21 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:08:35.050 17:09:21 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:08:35.050 17:09:21 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:08:35.050 17:09:21 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:08:35.050 17:09:21 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:08:35.050 17:09:21 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:08:35.050 17:09:21 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:08:35.050 17:09:21 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:08:35.050 17:09:21 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n2 00:08:35.050 17:09:21 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n2 00:08:35.050 17:09:21 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:08:35.050 17:09:21 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:08:35.050 17:09:21 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:08:35.050 17:09:21 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n3 00:08:35.050 17:09:21 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme1n3 00:08:35.050 17:09:21 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:08:35.050 17:09:21 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:08:35.050 17:09:21 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:08:35.050 17:09:21 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:08:35.050 17:09:21 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:08:35.050 17:09:21 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:08:35.050 17:09:21 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:08:35.050 17:09:21 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:08:35.050 17:09:21 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:35.616 17:09:21 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:08:35.616 17:09:21 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:08:35.616 17:09:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:35.616 17:09:21 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:08:35.616 17:09:21 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:08:35.616 17:09:21 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:08:36.182 17:09:22 setup.sh.acl -- 
setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:08:36.182 17:09:22 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:08:36.182 17:09:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:36.182 Hugepages 00:08:36.182 node hugesize free / total 00:08:36.182 17:09:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:08:36.182 17:09:22 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:08:36.182 17:09:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:36.182 00:08:36.182 Type BDF Vendor Device NUMA Driver Device Block devices 00:08:36.182 17:09:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:08:36.182 17:09:22 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:08:36.182 17:09:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:36.182 17:09:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:08:36.182 17:09:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:08:36.182 17:09:22 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:08:36.182 17:09:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:36.441 17:09:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:08:36.441 17:09:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:08:36.441 17:09:22 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:08:36.441 17:09:22 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:08:36.441 17:09:22 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:08:36.441 17:09:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:36.441 17:09:22 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:08:36.441 17:09:22 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:08:36.441 17:09:22 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:08:36.441 17:09:22 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:08:36.441 17:09:22 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:08:36.441 17:09:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:08:36.441 17:09:22 setup.sh.acl -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:08:36.441 17:09:22 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:08:36.441 17:09:22 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:36.441 17:09:22 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:36.441 17:09:22 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:08:36.441 ************************************ 00:08:36.441 START TEST denied 00:08:36.441 ************************************ 00:08:36.441 17:09:22 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # denied 00:08:36.441 17:09:22 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:08:36.441 17:09:22 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:08:36.441 17:09:22 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:08:36.441 17:09:22 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:08:36.441 17:09:22 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:08:37.376 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:08:37.376 17:09:23 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:08:37.376 17:09:23 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev 
driver 00:08:37.376 17:09:23 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:08:37.376 17:09:23 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:08:37.376 17:09:23 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:08:37.376 17:09:23 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:08:37.376 17:09:23 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:08:37.376 17:09:23 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:08:37.376 17:09:23 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:08:37.376 17:09:23 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:37.943 00:08:37.943 real 0m1.393s 00:08:37.943 user 0m0.532s 00:08:37.943 sys 0m0.818s 00:08:37.944 17:09:23 setup.sh.acl.denied -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:37.944 ************************************ 00:08:37.944 END TEST denied 00:08:37.944 17:09:23 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:08:37.944 ************************************ 00:08:37.944 17:09:23 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:08:37.944 17:09:23 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:37.944 17:09:23 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:37.944 17:09:23 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:08:37.944 ************************************ 00:08:37.944 START TEST allowed 00:08:37.944 ************************************ 00:08:37.944 17:09:24 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # allowed 00:08:37.944 17:09:24 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:08:37.944 17:09:24 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:08:37.944 17:09:24 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:08:37.944 17:09:24 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:08:37.944 17:09:24 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:08:38.880 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:38.880 17:09:24 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:08:38.880 17:09:24 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:08:38.880 17:09:24 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:08:38.880 17:09:24 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:08:38.880 17:09:24 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:08:38.880 17:09:24 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:08:38.880 17:09:24 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:08:38.880 17:09:24 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:08:38.880 17:09:24 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:08:38.880 17:09:24 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:39.446 00:08:39.446 real 0m1.504s 00:08:39.446 user 0m0.686s 00:08:39.446 sys 0m0.802s 00:08:39.446 17:09:25 setup.sh.acl.allowed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:39.446 17:09:25 setup.sh.acl.allowed -- common/autotest_common.sh@10 
-- # set +x 00:08:39.446 ************************************ 00:08:39.446 END TEST allowed 00:08:39.446 ************************************ 00:08:39.446 00:08:39.446 real 0m4.589s 00:08:39.446 user 0m1.991s 00:08:39.446 sys 0m2.554s 00:08:39.446 17:09:25 setup.sh.acl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:39.446 17:09:25 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:08:39.446 ************************************ 00:08:39.446 END TEST acl 00:08:39.446 ************************************ 00:08:39.446 17:09:25 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:08:39.446 17:09:25 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:39.446 17:09:25 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:39.446 17:09:25 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:08:39.446 ************************************ 00:08:39.446 START TEST hugepages 00:08:39.446 ************************************ 00:08:39.446 17:09:25 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:08:39.446 * Looking for test storage... 00:08:39.446 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:08:39.446 17:09:25 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:08:39.446 17:09:25 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:08:39.446 17:09:25 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:08:39.446 17:09:25 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:08:39.446 17:09:25 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:08:39.706 17:09:25 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:08:39.706 17:09:25 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:08:39.706 17:09:25 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:08:39.706 17:09:25 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:08:39.706 17:09:25 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:08:39.706 17:09:25 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:39.706 17:09:25 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:39.706 17:09:25 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:39.706 17:09:25 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:08:39.706 17:09:25 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:39.706 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:39.706 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 5877092 kB' 'MemAvailable: 7388792 kB' 'Buffers: 2436 kB' 'Cached: 1723096 kB' 'SwapCached: 0 kB' 'Active: 477904 kB' 'Inactive: 1352860 kB' 'Active(anon): 115720 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1352860 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 308 kB' 'Writeback: 0 kB' 'AnonPages: 106892 kB' 'Mapped: 48948 kB' 'Shmem: 10488 kB' 'KReclaimable: 67188 kB' 'Slab: 140088 kB' 'SReclaimable: 67188 kB' 'SUnreclaim: 72900 kB' 'KernelStack: 6424 kB' 'PageTables: 4428 kB' 'SecPageTables: 0 
kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412440 kB' 'Committed_AS: 329224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:39.707 17:09:25 
setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:39.707 17:09:25 setup.sh.hugepages -- 
setup/common.sh@32 -- # continue 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 
00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:39.707 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:39.708 
17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:39.708 
17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 
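
At this point the scan has returned 2048, so hugepages.sh records a 2048 kB default page size, notes the two knobs it will drive (/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages for the default size and /proc/sys/vm/nr_hugepages globally), unsets any HUGE_EVEN_ALLOC/HUGEMEM/HUGENODE/NRHUGE overrides, finds a single NUMA node, and clears every per-node hugepage pool before exporting CLEAR_HUGE=yes. A sketch of that clearing step, assuming the traced echo 0 is redirected into each pool's nr_hugepages file (xtrace does not show redirections) and that it runs as root; this is an illustration of the step, not the SPDK script verbatim.

    # Zero every per-node hugepage pool of every page size before the test
    # allocates its own pages. Paths are the ones visible in the trace.
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"    # assumed target of the traced 'echo 0'
        done
    done
    export CLEAR_HUGE=yes
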
00:08:39.708 17:09:25 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:08:39.708 17:09:25 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:08:39.708 17:09:25 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:39.708 17:09:25 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:39.708 17:09:25 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:08:39.708 ************************************ 00:08:39.708 START TEST default_setup 00:08:39.708 ************************************ 00:08:39.708 17:09:25 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # default_setup 00:08:39.708 17:09:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:08:39.708 17:09:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:08:39.708 17:09:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:08:39.708 17:09:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:08:39.708 17:09:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:08:39.708 17:09:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:08:39.708 17:09:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:08:39.709 17:09:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:08:39.709 17:09:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:08:39.709 17:09:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:08:39.709 17:09:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:08:39.709 17:09:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:08:39.709 17:09:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:08:39.709 17:09:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:08:39.709 17:09:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:08:39.709 17:09:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:08:39.709 17:09:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:08:39.709 17:09:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:08:39.709 17:09:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:08:39.709 17:09:25 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:08:39.709 17:09:25 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:08:39.709 17:09:25 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:40.275 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:40.275 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:40.275 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:40.560 17:09:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:08:40.560 17:09:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:08:40.560 17:09:26 setup.sh.hugepages.default_setup -- 
setup/hugepages.sh@90 -- # local sorted_t 00:08:40.560 17:09:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:08:40.560 17:09:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:08:40.560 17:09:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:08:40.560 17:09:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:08:40.560 17:09:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:08:40.560 17:09:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:08:40.560 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:08:40.560 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:08:40.560 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:08:40.560 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:08:40.560 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:40.560 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:40.560 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:40.560 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:08:40.560 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:40.560 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:40.560 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7970848 kB' 'MemAvailable: 9482416 kB' 'Buffers: 2436 kB' 'Cached: 1723088 kB' 'SwapCached: 0 kB' 'Active: 494296 kB' 'Inactive: 1352872 kB' 'Active(anon): 132112 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1352872 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 123256 kB' 'Mapped: 48772 kB' 'Shmem: 10464 kB' 'KReclaimable: 66904 kB' 'Slab: 139812 kB' 'SReclaimable: 66904 kB' 'SUnreclaim: 72908 kB' 'KernelStack: 6320 kB' 'PageTables: 4252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 346280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
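
default_setup then asks get_test_nr_hugepages for a 1024-page pool on node 0 and re-runs scripts/setup.sh, which skips the 00:03.0 virtio disk because its partitions are mounted and rebinds the two emulated NVMe controllers from nvme to uio_pci_generic. verify_nr_hugepages now re-reads meminfo through the same helper, this time asking for AnonHugePages, and the snapshot it prints already shows the expected pool: HugePages_Total and HugePages_Free at 1024 pages of 2048 kB each, i.e. Hugetlb: 2097152 kB (2 GiB). The helper is node-aware, as traced from setup/common.sh: with no node argument it reads /proc/meminfo, and with a node it would read that node's own meminfo file and strip the "Node <n> " prefix from every line. A rough sketch of that source selection, with node=0 used purely as an example id:

    # Node-aware meminfo sourcing: read the per-node file when a node id is
    # given and it exists, otherwise fall back to the global /proc/meminfo,
    # then strip the "Node <n> " prefix that only the per-node file carries.
    shopt -s extglob
    node=0                                    # example node id (assumption)
    mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")          # no-op for /proc/meminfo lines
    printf '%s\n' "${mem[@]}"
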
00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # IFS=': ' 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:40.561 
17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
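
This scan and the two that follow it in the trace (HugePages_Surp, then HugePages_Rsvd) each pull a single number out of that snapshot, and all three read 0 here. To eyeball the same hugepage state directly while the test runs, the stock tools are enough; this is offered as a convenience and is not something the traced SPDK scripts execute.

    # Show only the hugepage-related fields of the global meminfo snapshot,
    # plus the 2 MiB pool size the default_setup test is expected to leave behind.
    grep -E '^(HugePages_|Hugepagesize|Hugetlb|AnonHugePages)' /proc/meminfo
    cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages   # 1024 expected, per the snapshot above
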
00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:40.561 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:40.562 17:09:26 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7970596 kB' 'MemAvailable: 9482164 kB' 'Buffers: 2436 kB' 'Cached: 1723088 kB' 'SwapCached: 0 kB' 'Active: 494180 kB' 'Inactive: 1352872 kB' 'Active(anon): 
131996 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1352872 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 123108 kB' 'Mapped: 48644 kB' 'Shmem: 10464 kB' 'KReclaimable: 66904 kB' 'Slab: 139812 kB' 'SReclaimable: 66904 kB' 'SUnreclaim: 72908 kB' 'KernelStack: 6304 kB' 'PageTables: 4188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 346280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:40.562 17:09:26 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.562 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 
-- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:40.563 17:09:26 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.563 17:09:26 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # read -r var val _ 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.563 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:40.564 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:40.564 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:40.564 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.564 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:40.564 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:40.564 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:40.564 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.564 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:40.564 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:40.564 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:40.564 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.564 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:40.564 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:40.564 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:40.564 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.564 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:40.564 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:40.564 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:40.564 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.564 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:40.564 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:40.564 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:40.564 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.564 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:40.564 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:40.564 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:40.564 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.564 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:40.564 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:40.564 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:40.564 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.564 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# continue 00:08:40.564 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:40.564 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:40.564 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.564 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:40.564 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:40.564 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:40.564 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.564 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:08:40.564 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:40.564 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:40.564 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:40.564 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:08:40.564 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:08:40.564 17:09:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:08:40.564 17:09:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:08:40.564 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:08:40.564 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:08:40.564 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:08:40.564 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:08:40.564 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:40.564 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:40.564 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:40.564 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:08:40.564 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:40.564 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:08:40.564 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:08:40.564 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7970596 kB' 'MemAvailable: 9482164 kB' 'Buffers: 2436 kB' 'Cached: 1723088 kB' 'SwapCached: 0 kB' 'Active: 494056 kB' 'Inactive: 1352872 kB' 'Active(anon): 131872 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1352872 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 122980 kB' 'Mapped: 48644 kB' 'Shmem: 10464 kB' 'KReclaimable: 66904 kB' 'Slab: 139812 kB' 'SReclaimable: 66904 kB' 'SUnreclaim: 72908 kB' 'KernelStack: 6304 kB' 'PageTables: 4188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 346280 kB' 'VmallocTotal: 
00:08:40.564 17:09:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:08:40.564 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:08:40.564 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:08:40.564 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:08:40.564 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:08:40.564 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:08:40.564 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:08:40.564 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:08:40.564 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:08:40.564 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:08:40.564 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:08:40.564 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:08:40.564 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7970596 kB' 'MemAvailable: 9482164 kB' 'Buffers: 2436 kB' 'Cached: 1723088 kB' 'SwapCached: 0 kB' 'Active: 494056 kB' 'Inactive: 1352872 kB' 'Active(anon): 131872 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1352872 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 122980 kB' 'Mapped: 48644 kB' 'Shmem: 10464 kB' 'KReclaimable: 66904 kB' 'Slab: 139812 kB' 'SReclaimable: 66904 kB' 'SUnreclaim: 72908 kB' 'KernelStack: 6304 kB' 'PageTables: 4188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 346280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB'
[... trace trimmed: setup/common.sh@31-32 "continue"s past every non-matching /proc/meminfo key (MemTotal through HugePages_Free) ...]
00:08:40.566 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:08:40.566 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:08:40.566 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:08:40.566 nr_hugepages=1024
resv_hugepages=0
surplus_hugepages=0
anon_hugepages=0
17:09:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
17:09:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
17:09:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
17:09:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
17:09:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
17:09:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
17:09:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
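At setup/hugepages.sh@102-109 the test prints its bookkeeping (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0) and asserts that the kernel's default-size pool matches the request. A minimal stand-alone sketch of that arithmetic, using the values reported in the snapshot above (variable names here are illustrative, not the script's own), is:

    # Consistency check in the spirit of setup/hugepages.sh@107-109.
    nr_hugepages=1024        # requested default-size pages
    surp=0                   # HugePages_Surp from /proc/meminfo
    resv=0                   # HugePages_Rsvd from /proc/meminfo
    hp_total=1024            # HugePages_Total from /proc/meminfo
    hugepagesize_kb=2048     # Hugepagesize from /proc/meminfo

    (( hp_total == nr_hugepages + surp + resv )) || echo "unexpected hugepage accounting"
    (( hp_total == nr_hugepages )) || echo "pool size differs from request"
    # Sanity: hp_total * hugepagesize_kb = 1024 * 2048 = 2097152 kB,
    # which matches the Hugetlb figure in the snapshot above.
    echo "hugetlb pool: $(( hp_total * hugepagesize_kb )) kB"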
17:09:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7970596 kB' 'MemAvailable: 9482164 kB' 'Buffers: 2436 kB' 'Cached: 1723088 kB' 'SwapCached: 0 kB' 'Active: 494048 kB' 'Inactive: 1352872 kB' 'Active(anon): 131864 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1352872 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'AnonPages: 122972 kB' 'Mapped: 48644 kB' 'Shmem: 10464 kB' 'KReclaimable: 66904 kB' 'Slab: 139812 kB' 'SReclaimable: 66904 kB' 'SUnreclaim: 72908 kB' 'KernelStack: 6304 kB' 'PageTables: 4188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 346280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB'
[... trace trimmed: setup/common.sh@31-32 "continue"s past every non-matching /proc/meminfo key (MemTotal through Unaccepted) ...]
00:08:40.568 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:08:40.568 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:08:40.568 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:08:40.568 17:09:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:08:40.568 17:09:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:08:40.568 17:09:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:08:40.568 17:09:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:08:40.568 17:09:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:08:40.568 17:09:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=1
00:08:40.568 17:09:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:08:40.568 17:09:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
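get_nodes (setup/hugepages.sh@27-33) enumerates NUMA nodes with the extglob pattern /sys/devices/system/node/node+([0-9]) and records one entry per node (a single node on this VM, so no_nodes=1); the loop at @115-117 then folds each node's reserved and surplus pages into the expected totals. A hedged sketch of equivalent per-node bookkeeping done by hand follows; the array names and the awk-based sysfs reads are illustrative, since the traced script keeps its own nodes_sys/nodes_test counters instead:

    # Per-node hugepage bookkeeping in the spirit of get_nodes plus the @115-117 loop.
    shopt -s extglob nullglob

    declare -A node_totals=()
    for node_dir in /sys/devices/system/node/node+([0-9]); do
        # /sys/devices/system/node/node0 -> key "0"
        node_totals[${node_dir##*node}]=$(awk '/HugePages_Total/ {print $NF}' "$node_dir/meminfo")
    done
    no_nodes=${#node_totals[@]}
    (( no_nodes > 0 )) || echo "no NUMA nodes with hugepage info found"

    # Each node's surplus pages come from its own meminfo copy, whose lines are
    # prefixed "Node N " (e.g. "Node 0 HugePages_Surp: 0"), as the trace below shows.
    for node in "${!node_totals[@]}"; do
        surp=$(awk '/HugePages_Surp/ {print $NF}' "/sys/devices/system/node/node$node/meminfo")
        echo "node$node: HugePages_Total=${node_totals[$node]} HugePages_Surp=$surp"
    done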
00:08:40.568 17:09:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:08:40.568 17:09:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:08:40.568 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:08:40.568 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:08:40.568 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:08:40.568 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:08:40.568 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:08:40.568 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:08:40.568 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:08:40.568 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:08:40.568 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:08:40.568 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:08:40.568 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:08:40.568 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7970596 kB' 'MemUsed: 4271384 kB' 'SwapCached: 0 kB' 'Active: 494612 kB' 'Inactive: 1352872 kB' 'Active(anon): 132428 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1352872 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 196 kB' 'Writeback: 0 kB' 'FilePages: 1725524 kB' 'Mapped: 48860 kB' 'AnonPages: 123632 kB' 'Shmem: 10464 kB' 'KernelStack: 6352 kB' 'PageTables: 4316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66904 kB' 'Slab: 139812 kB' 'SReclaimable: 66904 kB' 'SUnreclaim: 72908 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[... trace trimmed: setup/common.sh@31-32 "continue"s past every non-matching node0 meminfo key (MemTotal through HugePages_Free) ...]
00:08:40.569 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:08:40.569 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:08:40.569 17:09:26 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:08:40.569 17:09:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:08:40.569 17:09:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:08:40.569 17:09:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 --
# sorted_t[nodes_test[node]]=1 00:08:40.569 17:09:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:08:40.569 node0=1024 expecting 1024 00:08:40.569 ************************************ 00:08:40.569 END TEST default_setup 00:08:40.569 ************************************ 00:08:40.569 17:09:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:08:40.569 17:09:26 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:08:40.569 00:08:40.569 real 0m0.985s 00:08:40.569 user 0m0.459s 00:08:40.569 sys 0m0.449s 00:08:40.569 17:09:26 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:40.569 17:09:26 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:08:40.569 17:09:26 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:08:40.569 17:09:26 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:40.569 17:09:26 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:40.569 17:09:26 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:08:40.569 ************************************ 00:08:40.570 START TEST per_node_1G_alloc 00:08:40.570 ************************************ 00:08:40.570 17:09:26 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # per_node_1G_alloc 00:08:40.570 17:09:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:08:40.570 17:09:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:08:40.570 17:09:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:08:40.570 17:09:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:08:40.570 17:09:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:08:40.570 17:09:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:08:40.570 17:09:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:08:40.570 17:09:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:08:40.570 17:09:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:08:40.570 17:09:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:08:40.570 17:09:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:08:40.570 17:09:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:08:40.570 17:09:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:08:40.570 17:09:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:08:40.570 17:09:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:08:40.570 17:09:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:08:40.570 17:09:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:08:40.570 17:09:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:08:40.570 17:09:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # 
nodes_test[_no_nodes]=512 00:08:40.570 17:09:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:08:40.570 17:09:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:08:40.570 17:09:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:08:40.570 17:09:26 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:08:40.570 17:09:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:08:40.570 17:09:26 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:40.857 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:41.121 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:41.121 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:41.121 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:08:41.121 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:08:41.121 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:08:41.121 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:08:41.121 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:08:41.121 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:08:41.121 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:08:41.121 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:08:41.121 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:08:41.121 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:08:41.121 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:08:41.121 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:08:41.121 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:08:41.121 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:41.121 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:41.121 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:41.121 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:41.121 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:41.121 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:41.121 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.121 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.121 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 9026476 kB' 'MemAvailable: 10538048 kB' 'Buffers: 2436 kB' 'Cached: 1723088 kB' 'SwapCached: 0 kB' 'Active: 494284 kB' 'Inactive: 1352876 kB' 'Active(anon): 132100 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 
'Inactive(file): 1352876 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 123452 kB' 'Mapped: 48720 kB' 'Shmem: 10464 kB' 'KReclaimable: 66904 kB' 'Slab: 139772 kB' 'SReclaimable: 66904 kB' 'SUnreclaim: 72868 kB' 'KernelStack: 6260 kB' 'PageTables: 4100 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 346280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:08:41.121 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.121 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.121 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.121 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.121 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.121 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.121 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.121 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.121 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.121 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.121 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.121 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.121 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.121 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.121 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.121 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.121 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.121 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.121 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.121 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.121 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.121 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.121 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.121 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
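The per_node_1G_alloc test entered above requests 1048576 kB on node 0, which at the default 2048 kB hugepage size works out to NRHUGE=512 with HUGENODE=0 before /home/vagrant/spdk_repo/spdk/scripts/setup.sh is invoked. A minimal sketch of how such a per-node reservation can be made through sysfs is given here; the sysfs paths and the fixed 2048 kB page size are assumptions for illustration, not a transcription of what setup.sh itself does.

#!/usr/bin/env bash
# Sketch: reserve NRHUGE hugepages of the default 2048 kB size on one NUMA node.
# Assumes the standard sysfs layout and root privileges; not a copy of scripts/setup.sh.
set -euo pipefail

NRHUGE=${NRHUGE:-512}      # 512 x 2048 kB = 1048576 kB, matching get_test_nr_hugepages 1048576 0
HUGENODE=${HUGENODE:-0}    # target NUMA node

node_dir=/sys/devices/system/node/node${HUGENODE}/hugepages/hugepages-2048kB

echo "$NRHUGE" > "${node_dir}/nr_hugepages"

# Confirm the kernel actually granted the request.
granted=$(cat "${node_dir}/nr_hugepages")
echo "node${HUGENODE}: requested ${NRHUGE}, granted ${granted} hugepages"

The kernel may grant fewer pages than requested if contiguous memory is scarce, which is why the trace re-reads the counters afterwards rather than trusting the write.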
00:08:41.121 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.121 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.121 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.121 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.121 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.121 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.121 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
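Most of the trace in this stretch is setup/common.sh's get_meminfo walking /proc/meminfo field by field: each line is split on ': ', every key that is not the one requested is skipped with continue, and the matching value is echoed before returning 0. A condensed sketch of that pattern, simplified to the system-wide /proc/meminfo case (the real helper also handles per-node meminfo files via the node= argument), is:

# Sketch of the get_meminfo pattern visible in the trace: scan /proc/meminfo for one key.
# Simplified relative to setup/common.sh, which also strips the "Node N" prefix for per-node reads.
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # skip non-matching keys, as in the trace above
        echo "$val"                        # value in kB, or a bare count for HugePages_* keys
        return 0
    done < /proc/meminfo
    return 1
}

# Example: get_meminfo_sketch HugePages_Free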
00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.122 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 9026476 kB' 'MemAvailable: 10538048 kB' 'Buffers: 2436 kB' 'Cached: 1723088 kB' 'SwapCached: 0 kB' 'Active: 494400 kB' 'Inactive: 1352876 kB' 'Active(anon): 132216 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1352876 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 123308 kB' 'Mapped: 48712 kB' 'Shmem: 10464 kB' 'KReclaimable: 66904 kB' 'Slab: 139780 kB' 'SReclaimable: 66904 kB' 'SUnreclaim: 72876 kB' 'KernelStack: 6312 kB' 'PageTables: 4072 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 346280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.123 17:09:27 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.123 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.124 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.125 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.125 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.125 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.125 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.125 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
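The verify_nr_hugepages pass running here reads AnonHugePages, HugePages_Surp and HugePages_Rsvd in turn (all 0 in this snapshot) and then compares the per-node totals against the expected count, the same shape of check that printed 'node0=1024 expecting 1024' for default_setup. A rough sketch of that comparison, using a hypothetical helper name and the standard per-node sysfs counters rather than the script's internals, is:

# Rough sketch of the verification step: compare granted hugepages on a node
# with what the test expected. Helper name and variables are illustrative only.
verify_node_hugepages_sketch() {
    local node=$1 expected=$2
    local hp_dir=/sys/devices/system/node/node${node}/hugepages/hugepages-2048kB
    local total free
    total=$(cat "${hp_dir}/nr_hugepages")
    free=$(cat "${hp_dir}/free_hugepages")
    echo "node${node}=${total} expecting ${expected} (free: ${free})"
    [[ $total -eq $expected ]]
}

# Example: verify_node_hugepages_sketch 0 512   # per_node_1G_alloc expects 512 pages on node 0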
00:08:41.125 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.125 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.125 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.125 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.125 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.125 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:08:41.125 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:08:41.125 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:08:41.125 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:08:41.125 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:08:41.125 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:08:41.125 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:08:41.125 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:41.125 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:41.125 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:41.125 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:41.125 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:41.125 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:41.125 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.125 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.125 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 9029332 kB' 'MemAvailable: 10540908 kB' 'Buffers: 2436 kB' 'Cached: 1723092 kB' 'SwapCached: 0 kB' 'Active: 494140 kB' 'Inactive: 1352880 kB' 'Active(anon): 131956 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1352880 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 123096 kB' 'Mapped: 48600 kB' 'Shmem: 10464 kB' 'KReclaimable: 66904 kB' 'Slab: 139800 kB' 'SReclaimable: 66904 kB' 'SUnreclaim: 72896 kB' 'KernelStack: 6288 kB' 'PageTables: 4136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 345912 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:08:41.125 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.125 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[xtrace condensed: get_meminfo scans the remaining /proc/meminfo keys (MemFree through HugePages_Free) against HugePages_Rsvd; every non-matching key takes the `continue` branch at setup/common.sh@32 and the next line is re-read with IFS=': ' at setup/common.sh@31, until HugePages_Rsvd matches, as shown below]
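Editor's note: the condensed loop above is the generic get_meminfo helper walking a meminfo snapshot key by key until it reaches the requested field and echoes its value. A minimal stand-alone sketch of the same pattern, assuming the standard /proc and sysfs paths (the function below is illustrative, not SPDK's exact setup/common.sh code):

#!/usr/bin/env bash
# Stand-alone sketch of a get_meminfo-style lookup (illustrative, not SPDK's exact helper).
get_meminfo_sketch() {
    local get=$1 node=${2:-}            # key to fetch, optional NUMA node number
    local mem_f=/proc/meminfo var val _

    # Per-node counters live under sysfs; fall back to the global file otherwise.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    # Per-node lines carry a "Node <n> " prefix; strip it so the keys line up,
    # then scan until the requested key is found and echo its value.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"                 # e.g. 0 for HugePages_Rsvd, 512 for HugePages_Total
            return 0
        fi
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    return 1
}
# Example: get_meminfo_sketch HugePages_Rsvd    -> 0
#          get_meminfo_sketch HugePages_Surp 0  -> 0 (node 0)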
17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.127 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.127 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:08:41.127 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:08:41.127 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:08:41.127 nr_hugepages=512 00:08:41.127 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:08:41.127 resv_hugepages=0 00:08:41.127 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:08:41.127 surplus_hugepages=0 00:08:41.127 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:08:41.127 anon_hugepages=0 00:08:41.127 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:08:41.127 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:08:41.127 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:08:41.127 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:08:41.127 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:08:41.127 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:08:41.127 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:08:41.127 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:41.127 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:41.127 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:41.127 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:41.127 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:41.127 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:41.127 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.127 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.127 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 9030240 kB' 'MemAvailable: 10541816 kB' 'Buffers: 2436 kB' 'Cached: 1723092 kB' 'SwapCached: 0 kB' 'Active: 493996 kB' 'Inactive: 1352880 kB' 'Active(anon): 131812 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1352880 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'AnonPages: 122956 kB' 'Mapped: 48600 kB' 'Shmem: 10464 kB' 'KReclaimable: 66904 kB' 'Slab: 139800 kB' 'SReclaimable: 66904 kB' 'SUnreclaim: 72896 kB' 'KernelStack: 6304 kB' 'PageTables: 4188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 346280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 
kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:08:41.127
[xtrace condensed: get_meminfo then scans the same snapshot key by key against HugePages_Total; every field from MemTotal through Unaccepted takes the `continue` branch at setup/common.sh@32 until HugePages_Total matches and the value 512 is echoed, as shown below]
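Editor's note: the check that follows compares the kernel-reported pool against the requested count plus surplus and reserved pages. Worked out with this run's values (variable names are illustrative):

# Worked sketch of the consistency check, using this run's values (illustrative names).
nr_hugepages=512   # requested pool: 512 x 2048 kB pages = 1 GiB
surp=0             # HugePages_Surp from the snapshot above
resv=0             # HugePages_Rsvd from the snapshot above
total=512          # HugePages_Total from the snapshot above

# The trace's `(( 512 == nr_hugepages + surp + resv ))` reduces to 512 == 512 + 0 + 0:
if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage pool consistent: $total == $nr_hugepages + $surp + $resv"
fi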
-- setup/common.sh@33 -- # echo 512 00:08:41.129 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:08:41.129 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:08:41.129 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:08:41.129 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:08:41.129 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:08:41.129 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:08:41.129 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:08:41.129 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:08:41.129 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:08:41.129 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:08:41.129 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:08:41.129 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:41.129 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:08:41.129 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:08:41.129 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:41.129 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:41.129 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:08:41.129 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:08:41.129 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:41.129 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:41.129 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.129 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 9030628 kB' 'MemUsed: 3211352 kB' 'SwapCached: 0 kB' 'Active: 494224 kB' 'Inactive: 1352880 kB' 'Active(anon): 132040 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1352880 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 204 kB' 'Writeback: 0 kB' 'FilePages: 1725528 kB' 'Mapped: 48600 kB' 'AnonPages: 123184 kB' 'Shmem: 10464 kB' 'KernelStack: 6288 kB' 'PageTables: 4148 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66904 kB' 'Slab: 139800 kB' 'SReclaimable: 66904 kB' 'SUnreclaim: 72896 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:08:41.129 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.129 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.129 17:09:27 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[xtrace condensed: get_meminfo repeats the key-by-key scan against node 0's snapshot (/sys/devices/system/node/node0/meminfo), comparing each field with HugePages_Surp; every non-matching key takes the `continue` branch until HugePages_Surp matches and 0 is echoed, as shown below]
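Editor's note: the per-node pass condensed above is driven by the earlier get_nodes step, which globs /sys/devices/system/node/node<N> and then reads each node's own meminfo. A minimal sketch of that enumeration, assuming the standard sysfs layout (the function name and output format are illustrative):

#!/usr/bin/env bash
# Sketch of the per-node pass: list NUMA nodes and read each node's HugePages_Total
# from its own sysfs meminfo. The function name and output format are illustrative.
check_nodes_sketch() {
    local node_dir node total expected=${1:-512}
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        node=${node_dir##*node}
        # Per-node lines look like: "Node 0 HugePages_Total:   512"
        total=$(awk '/HugePages_Total/ {print $4}' "$node_dir/meminfo")
        echo "node${node}=${total} expecting ${expected}"
    done
}
# On this single-node runner the output matches the trace: node0=512 expecting 512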
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.130 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.130 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.130 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.130 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.130 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.130 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.130 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.130 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:08:41.130 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.130 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.130 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.130 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:08:41.130 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:08:41.130 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:08:41.130 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:08:41.130 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:08:41.130 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:08:41.130 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:08:41.130 node0=512 expecting 512 00:08:41.130 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:08:41.130 00:08:41.130 real 0m0.518s 00:08:41.130 user 0m0.270s 00:08:41.130 sys 0m0.277s 00:08:41.130 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:41.130 17:09:27 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:08:41.130 ************************************ 00:08:41.130 END TEST per_node_1G_alloc 00:08:41.130 ************************************ 00:08:41.130 17:09:27 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:08:41.130 17:09:27 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:41.130 17:09:27 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:41.130 17:09:27 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:08:41.130 ************************************ 00:08:41.130 START TEST even_2G_alloc 00:08:41.130 ************************************ 00:08:41.131 17:09:27 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # even_2G_alloc 00:08:41.131 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:08:41.131 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:08:41.131 17:09:27 
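The trace above closes out the per_node_1G_alloc subtest: the field-by-field scan of the meminfo snapshot reached HugePages_Surp, returned 0, and the per-node accounting printed "node0=512 expecting 512", which matches 1 GiB worth of the default 2048 kB hugepages on node 0, before the subtest finished in roughly half a second and run_test kicked off even_2G_alloc. As a hedged, stand-alone sketch (not the suite's own code; the variable names are illustrative), the same per-node count can be read straight from the kernel's sysfs counters:

# Stand-alone equivalent of the "node0=512 expecting 512" check above.
# Assumes the standard per-node sysfs counters for 2048 kB hugepages.
expected=512
node=0
nr=$(cat "/sys/devices/system/node/node${node}/hugepages/hugepages-2048kB/nr_hugepages")
[[ "$nr" == "$expected" ]] && echo "node${node}=${nr} expecting ${expected}"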
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:08:41.131 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:08:41.131 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:08:41.131 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:08:41.131 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:08:41.131 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:08:41.131 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:08:41.131 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:08:41.131 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:08:41.131 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:08:41.131 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:08:41.131 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:08:41.131 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:08:41.131 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:08:41.131 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:08:41.131 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:08:41.131 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:08:41.131 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:08:41.131 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:08:41.131 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:08:41.131 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:08:41.131 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:41.702 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:41.702 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:41.702 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:41.702 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:08:41.702 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:08:41.702 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:08:41.702 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:08:41.702 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:08:41.702 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:08:41.702 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:08:41.702 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:08:41.702 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:08:41.702 17:09:27 setup.sh.hugepages.even_2G_alloc -- 
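even_2G_alloc sizes its allocation the same way the other hugepages subtests do: get_test_nr_hugepages is handed 2097152 kB, and at the default Hugepagesize of 2048 kB that works out to nr_hugepages=1024, all of which lands in nodes_test[0] because the trace shows a single node (_no_nodes=1). The test then re-runs scripts/setup.sh with NRHUGE=1024 and HUGE_EVEN_ALLOC=yes, which is where the PCI binding lines above come from, and verify_nr_hugepages starts re-reading meminfo. A minimal sketch of the sizing arithmetic, assuming the values shown in this log (the variable names are illustrative, not the script's own):

# Sketch of the sizing step: 2 GiB worth of default-size hugepages.
size_kb=2097152                                                     # requested size from the log
hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)  # 2048 on this VM
nr_hugepages=$(( size_kb / hugepagesize_kb ))                       # 2097152 / 2048 = 1024
echo "nr_hugepages=${nr_hugepages}"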
setup/common.sh@17 -- # local get=AnonHugePages 00:08:41.702 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:08:41.702 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:08:41.702 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:41.702 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:41.702 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:41.702 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:41.702 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:41.702 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:41.702 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.702 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7980748 kB' 'MemAvailable: 9492324 kB' 'Buffers: 2436 kB' 'Cached: 1723092 kB' 'SwapCached: 0 kB' 'Active: 494784 kB' 'Inactive: 1352880 kB' 'Active(anon): 132600 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1352880 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 123512 kB' 'Mapped: 48892 kB' 'Shmem: 10464 kB' 'KReclaimable: 66904 kB' 'Slab: 139808 kB' 'SReclaimable: 66904 kB' 'SUnreclaim: 72904 kB' 'KernelStack: 6360 kB' 'PageTables: 4272 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 348928 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.703 
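The long printf above is get_meminfo's snapshot of /proc/meminfo (captured with mapfile -t mem, with any leading "Node <n> " prefix stripped for per-node files), and the repeated IFS=': ' / read -r var val _ / [[ ... ]] / continue lines that follow are the same function walking that snapshot one field at a time until the requested key, here AnonHugePages, matches, at which point it echoes the value and returns. Outside of an xtrace the lookup collapses to a one-liner; the helper below is a hedged equivalent with a made-up name, not the script's implementation, and its per-node handling is simplified:

# Hypothetical helper doing what the traced loop does: look up one meminfo field.
get_meminfo_value() {
    local key=$1 node=${2:-}
    local f=/proc/meminfo
    [[ -n $node ]] && f=/sys/devices/system/node/node${node}/meminfo
    # Handles both "AnonHugePages: 0 kB" and per-node "Node 0 AnonHugePages: 0 kB" lines.
    awk -v k="${key}:" '{ for (i = 1; i <= NF; i++) if ($i == k) { print $(i + 1); exit } }' "$f"
}
get_meminfo_value AnonHugePages   # prints 0 for the snapshot above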
17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.703 17:09:27 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.703 17:09:27 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.703 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.704 
17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7981392 kB' 'MemAvailable: 9492964 kB' 'Buffers: 2436 kB' 'Cached: 1723088 kB' 'SwapCached: 0 kB' 'Active: 494484 kB' 'Inactive: 1352876 kB' 'Active(anon): 132300 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1352876 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 123256 kB' 'Mapped: 48632 kB' 'Shmem: 10464 kB' 'KReclaimable: 66904 kB' 'Slab: 139796 kB' 'SReclaimable: 66904 kB' 'SUnreclaim: 72892 kB' 'KernelStack: 6296 kB' 'PageTables: 4100 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 
13461016 kB' 'Committed_AS: 346280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54628 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.704 
17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.704 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.705 
17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:08:41.705 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7981392 kB' 'MemAvailable: 9492968 kB' 'Buffers: 2436 kB' 'Cached: 1723092 kB' 'SwapCached: 0 kB' 'Active: 494144 kB' 'Inactive: 1352880 kB' 'Active(anon): 131960 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1352880 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 123352 kB' 'Mapped: 48600 kB' 'Shmem: 10464 kB' 'KReclaimable: 66904 kB' 'Slab: 139792 kB' 'SReclaimable: 66904 kB' 'SUnreclaim: 72888 kB' 'KernelStack: 6288 kB' 'PageTables: 4148 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 346280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.706 17:09:27 
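By this point the verification pass has pulled anon=0 (no transparent hugepages in the snapshot) and surp=0 out of the two scans above, and the identical field-by-field walk now repeats for HugePages_Rsvd. The snapshot itself already shows the state even_2G_alloc is checking for: HugePages_Total: 1024, HugePages_Free: 1024, HugePages_Rsvd: 0 and HugePages_Surp: 0, meaning the full 1024-page allocation is present with nothing reserved or surplus. The shape of the comparison below is an assumption about intent rather than the script's literal code, reusing the illustrative get_meminfo_value helper sketched earlier:

# Illustrative cross-check only: compare the snapshot against the expected 1024 pages.
expected=1024
total=$(get_meminfo_value HugePages_Total)
surp=$(get_meminfo_value HugePages_Surp)
resv=$(get_meminfo_value HugePages_Rsvd)
(( total - surp == expected )) && (( resv == 0 )) && echo "nr_hugepages=${expected} as expected"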
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.706 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.707 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.707 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.707 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.707 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.707 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.707 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.707 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.707 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.707 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.707 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.707 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.707 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.707 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.707 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.707 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.707 17:09:27 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.707 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.707 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.707 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.707 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.707 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.707 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.707 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.707 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.707 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.707 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.707 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.707 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.707 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.707 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.707 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.707 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.707 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.707 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.707 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.707 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.707 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.707 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.707 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.707 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.707 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.707 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.707 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.707 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.707 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.707 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.707 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.707 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.707 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.707 17:09:27 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:08:41.707 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.707 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.707 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.707 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.707 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.707 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.707 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.707 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.707 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.707 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.707 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.707 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.707 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.707 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.707 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.707 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.707 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.707 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.707 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.707 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.707 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.707 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.707 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.707 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.707 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.707 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.707 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.707 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.707 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.707 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.707 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.707 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.707 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.707 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.707 
17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.707 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.707 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.707 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.707 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.707 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.707 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.707 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.707 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.707 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.707 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.707 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.708 
17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:08:41.708 nr_hugepages=1024 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:08:41.708 resv_hugepages=0 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:08:41.708 surplus_hugepages=0 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:08:41.708 anon_hugepages=0 00:08:41.708 17:09:27 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7981392 kB' 'MemAvailable: 9492968 kB' 'Buffers: 2436 kB' 'Cached: 1723092 kB' 'SwapCached: 0 kB' 'Active: 493984 kB' 'Inactive: 1352880 kB' 'Active(anon): 131800 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1352880 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 122940 kB' 'Mapped: 48608 kB' 'Shmem: 10464 kB' 'KReclaimable: 66904 kB' 'Slab: 139796 kB' 'SReclaimable: 66904 kB' 'SUnreclaim: 72892 kB' 'KernelStack: 6304 kB' 'PageTables: 4188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 346280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.708 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.709 17:09:27 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.709 17:09:27 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.709 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7981392 kB' 'MemUsed: 4260588 kB' 'SwapCached: 0 kB' 'Active: 494276 kB' 'Inactive: 1352880 kB' 'Active(anon): 132092 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1352880 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'FilePages: 1725528 kB' 'Mapped: 48608 kB' 'AnonPages: 123228 kB' 'Shmem: 10464 kB' 'KernelStack: 6304 kB' 'PageTables: 4188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66904 kB' 'Slab: 139796 kB' 'SReclaimable: 66904 kB' 'SUnreclaim: 72892 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.710 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.711 
17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:08:41.711 node0=1024 expecting 1024 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:08:41.711 00:08:41.711 real 0m0.532s 00:08:41.711 user 0m0.278s 00:08:41.711 sys 0m0.286s 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:41.711 17:09:27 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:08:41.711 
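[editor's note] The even_2G_alloc verification traced just above (hugepages.sh@100 through @130) reduces to a small piece of arithmetic: the kernel's HugePages_Total must equal the requested page count plus any surplus and reserved pages, and each NUMA node's per-node count must match its expected share (with the single memory node in this run, that share is the full 1024). The sketch below reconstructs that check from the trace only; verify_even_alloc and its layout are illustrative, not the actual SPDK setup/hugepages.sh helpers.

    #!/usr/bin/env bash
    # Sketch of the verification traced above (hugepages.sh@107-@130); the
    # function name and structure are illustrative, not the SPDK helpers.

    verify_even_alloc() {
        local expected=$1                 # pages expected per node (1024 in this run)
        local total surp resv node count

        total=$(awk '/^HugePages_Total:/ {print $NF}' /proc/meminfo)
        surp=$(awk '/^HugePages_Surp:/   {print $NF}' /proc/meminfo)
        resv=$(awk '/^HugePages_Rsvd:/   {print $NF}' /proc/meminfo)

        # Global invariant checked at hugepages.sh@107/@110:
        # total == requested + surplus + reserved.
        (( total == expected + surp + resv )) || return 1

        # Per-node accounting (hugepages.sh@115-@130): every node must report
        # its expected share; node files read "Node 0 HugePages_Total: 1024".
        for node in /sys/devices/system/node/node[0-9]*; do
            count=$(awk '/HugePages_Total:/ {print $NF}' "$node/meminfo")
            echo "${node##*/}=$count expecting $expected"
            (( count == expected )) || return 1
        done
    }

    verify_even_alloc 1024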
************************************ 00:08:41.711 END TEST even_2G_alloc 00:08:41.711 ************************************ 00:08:41.711 17:09:27 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:08:41.711 17:09:27 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:41.711 17:09:27 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:41.711 17:09:27 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:08:41.711 ************************************ 00:08:41.711 START TEST odd_alloc 00:08:41.711 ************************************ 00:08:41.711 17:09:27 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # odd_alloc 00:08:41.711 17:09:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:08:41.711 17:09:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:08:41.711 17:09:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:08:41.711 17:09:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:08:41.712 17:09:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:08:41.712 17:09:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:08:41.712 17:09:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:08:41.712 17:09:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:08:41.712 17:09:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:08:41.712 17:09:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:08:41.712 17:09:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:08:41.712 17:09:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:08:41.712 17:09:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:08:41.712 17:09:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:08:41.712 17:09:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:08:41.712 17:09:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:08:41.712 17:09:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:08:41.712 17:09:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:08:41.712 17:09:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:08:41.712 17:09:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:08:41.712 17:09:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:08:41.712 17:09:27 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:08:41.712 17:09:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:08:41.712 17:09:27 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:42.282 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:42.282 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:42.282 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:42.282 17:09:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:08:42.282 17:09:28 
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:08:42.282 17:09:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:08:42.282 17:09:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:08:42.282 17:09:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:08:42.282 17:09:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:08:42.282 17:09:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:08:42.282 17:09:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:08:42.282 17:09:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:08:42.282 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:08:42.282 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:08:42.282 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:08:42.282 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:42.282 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:42.282 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:42.282 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:42.282 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:42.282 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:42.282 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.282 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.282 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7988684 kB' 'MemAvailable: 9500260 kB' 'Buffers: 2436 kB' 'Cached: 1723092 kB' 'SwapCached: 0 kB' 'Active: 494564 kB' 'Inactive: 1352880 kB' 'Active(anon): 132380 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1352880 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 123268 kB' 'Mapped: 48796 kB' 'Shmem: 10464 kB' 'KReclaimable: 66904 kB' 'Slab: 139800 kB' 'SReclaimable: 66904 kB' 'SUnreclaim: 72896 kB' 'KernelStack: 6244 kB' 'PageTables: 4124 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 346280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:08:42.282 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.282 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.282 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.282 17:09:28 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:08:42.282 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.282 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.282 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.282 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.282 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.282 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.282 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.282 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.283 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.284 17:09:28 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7988180 kB' 'MemAvailable: 9499756 kB' 'Buffers: 2436 kB' 'Cached: 1723092 kB' 'SwapCached: 0 kB' 'Active: 494004 kB' 'Inactive: 1352880 kB' 'Active(anon): 131820 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1352880 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 123232 kB' 'Mapped: 48608 kB' 'Shmem: 10464 kB' 'KReclaimable: 66904 kB' 'Slab: 139820 kB' 'SReclaimable: 66904 kB' 'SUnreclaim: 72916 kB' 'KernelStack: 6320 kB' 'PageTables: 4232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 
13459992 kB' 'Committed_AS: 346280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54660 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.284 17:09:28 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:08:42.284 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.285 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:08:42.286 17:09:28 
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7988180 kB' 'MemAvailable: 9499756 kB' 'Buffers: 2436 kB' 'Cached: 1723092 kB' 'SwapCached: 0 kB' 'Active: 494004 kB' 'Inactive: 1352880 kB' 'Active(anon): 131820 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1352880 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 122972 kB' 'Mapped: 48608 kB' 'Shmem: 10464 kB' 'KReclaimable: 66904 kB' 'Slab: 139816 kB' 'SReclaimable: 66904 kB' 'SUnreclaim: 72912 kB' 'KernelStack: 6304 kB' 'PageTables: 4188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 346280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.286 17:09:28 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.286 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# continue 00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.287 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.288 
17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:08:42.288 nr_hugepages=1025 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:08:42.288 resv_hugepages=0 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:08:42.288 surplus_hugepages=0 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:08:42.288 anon_hugepages=0 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7988792 kB' 'MemAvailable: 9500368 kB' 'Buffers: 2436 kB' 'Cached: 1723092 kB' 'SwapCached: 0 kB' 'Active: 
494004 kB' 'Inactive: 1352880 kB' 'Active(anon): 131820 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1352880 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 123008 kB' 'Mapped: 48608 kB' 'Shmem: 10464 kB' 'KReclaimable: 66904 kB' 'Slab: 139816 kB' 'SReclaimable: 66904 kB' 'SUnreclaim: 72912 kB' 'KernelStack: 6320 kB' 'PageTables: 4232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 346280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54644 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.288 
17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.288 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
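The xtrace in this stretch is setup/common.sh's get_meminfo walking /proc/meminfo one line at a time: it sets IFS=': ', reads a key/value pair, hits continue for every key that is not the one requested, then echoes the matching value and returns 0. A minimal sketch of that lookup follows; get_field is a stand-in name for illustration, not the SPDK helper itself, and the per-node handling is hedged from what the trace shows.

    # Sketch of the traced lookup: scan a meminfo file, skip non-matching keys,
    # print the value of the requested one. Per-node files prefix each line
    # with "Node <n> ", which is stripped before matching.
    get_field() {                    # get_field <FieldName> [node]
        local get=$1 node=$2 var val _
        local mem_f=/proc/meminfo
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # the long runs of "continue" above
            echo "$val"
            return 0
        done < <(sed 's/^Node [0-9]* //' "$mem_f")
        return 1
    }
    get_field HugePages_Total        # prints 1025 at this point in the run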
00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.289 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:08:42.290 
17:09:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7988792 kB' 'MemUsed: 4253188 kB' 'SwapCached: 0 kB' 'Active: 494316 kB' 'Inactive: 1352880 kB' 'Active(anon): 132132 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1352880 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'FilePages: 1725528 kB' 'Mapped: 48608 kB' 'AnonPages: 123316 kB' 'Shmem: 10464 kB' 'KernelStack: 6320 kB' 'PageTables: 4232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66904 kB' 'Slab: 139816 kB' 'SReclaimable: 66904 kB' 'SUnreclaim: 72912 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.290 17:09:28 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
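Just above, get_nodes globs /sys/devices/system/node/node+([0-9]) (one node on this VM), records 1025 pages for it in nodes_sys, and then re-reads node 0 through /sys/devices/system/node/node0/meminfo to pick up HugePages_Surp. A rough equivalent that takes the per-node counts straight from the sysfs hugepage counters; this is a sketch assuming the 2048 kB default page size this run reports, and nodes_seen is a made-up name.

    # Sketch: enumerate NUMA nodes and record each node's 2 MiB hugepage count.
    shopt -s extglob                 # the +([0-9]) glob needs extglob
    declare -A nodes_seen
    for node in /sys/devices/system/node/node+([0-9]); do
        n=${node##*node}             # .../node/node0 -> 0
        nodes_seen[$n]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    echo "per-node hugepages: ${nodes_seen[*]}"   # "1025" here: one node, 1025 pages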
00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.290 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.291 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.291 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.291 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.291 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.291 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.291 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.291 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.291 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.291 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.291 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.291 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.291 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.291 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.291 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.291 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.291 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.291 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.291 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.291 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.291 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.291 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.291 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.291 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.291 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.291 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.291 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.291 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.291 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.291 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.291 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.291 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.291 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.291 17:09:28 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:08:42.291 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.291 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.291 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.291 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.291 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.291 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.291 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.291 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.291 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.291 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.291 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.291 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.291 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.291 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.291 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.291 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.291 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.291 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.291 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.291 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.291 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.291 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.291 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:08:42.291 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.291 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.291 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.291 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:08:42.291 17:09:28 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:08:42.291 17:09:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:08:42.291 17:09:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:08:42.291 17:09:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:08:42.291 17:09:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:08:42.291 17:09:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:08:42.291 node0=1025 expecting 1025 00:08:42.291 17:09:28 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:08:42.291 
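That closes the odd_alloc verification: nothing reserved or surplus, the global HugePages_Total matches the 1025 pages requested, and node 0 (the only node) holds all of them, hence the "node0=1025 expecting 1025" line. Restated as a condensed sketch with this run's numbers (not the script verbatim):

    nr_hugepages=1025 resv=0 surp=0    # odd page count requested; nothing reserved or surplus
    total=1025                         # HugePages_Total from the meminfo dump above
    (( total == nr_hugepages + surp + resv )) || echo "global hugepage count mismatch"
    node0=1025                         # single-node VM, so node 0 carries the whole pool
    [[ $node0 == "$nr_hugepages" ]] && echo "node0=$node0 expecting $nr_hugepages"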
00:08:42.291 real 0m0.526s 00:08:42.291 user 0m0.285s 00:08:42.291 sys 0m0.271s 00:08:42.291 17:09:28 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:42.291 17:09:28 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:08:42.291 ************************************ 00:08:42.291 END TEST odd_alloc 00:08:42.291 ************************************ 00:08:42.291 17:09:28 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:08:42.291 17:09:28 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:42.291 17:09:28 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:42.291 17:09:28 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:08:42.291 ************************************ 00:08:42.291 START TEST custom_alloc 00:08:42.291 ************************************ 00:08:42.291 17:09:28 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # custom_alloc 00:08:42.291 17:09:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:08:42.291 17:09:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:08:42.291 17:09:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:08:42.291 17:09:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:08:42.291 17:09:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:08:42.291 17:09:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:08:42.291 17:09:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:08:42.291 17:09:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:08:42.291 17:09:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:08:42.291 17:09:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:08:42.291 17:09:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:08:42.291 17:09:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:08:42.291 17:09:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:08:42.291 17:09:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:08:42.291 17:09:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:08:42.291 17:09:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:08:42.291 17:09:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:08:42.291 17:09:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:08:42.291 17:09:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:08:42.291 17:09:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:08:42.291 17:09:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:08:42.291 17:09:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:08:42.291 17:09:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:08:42.291 17:09:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:08:42.291 17:09:28 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:08:42.291 17:09:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:08:42.291 17:09:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:08:42.291 17:09:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:08:42.291 17:09:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:08:42.291 17:09:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:08:42.291 17:09:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:08:42.291 17:09:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:08:42.291 17:09:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:08:42.291 17:09:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:08:42.291 17:09:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:08:42.291 17:09:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:08:42.291 17:09:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:08:42.291 17:09:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:08:42.292 17:09:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:08:42.292 17:09:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:08:42.292 17:09:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:08:42.292 17:09:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:08:42.292 17:09:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:08:42.292 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:08:42.292 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:42.868 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:42.868 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:42.868 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:42.868 17:09:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:08:42.868 17:09:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:08:42.868 17:09:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:08:42.868 17:09:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:08:42.868 17:09:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:08:42.868 17:09:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:08:42.868 17:09:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:08:42.868 17:09:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:08:42.868 17:09:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:08:42.868 17:09:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:08:42.868 17:09:28 
setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:08:42.868 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:08:42.868 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:08:42.868 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:42.868 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:42.868 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:42.868 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:42.868 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:42.868 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:42.868 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.868 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.868 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 9036156 kB' 'MemAvailable: 10547728 kB' 'Buffers: 2436 kB' 'Cached: 1723088 kB' 'SwapCached: 0 kB' 'Active: 494864 kB' 'Inactive: 1352876 kB' 'Active(anon): 132680 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1352876 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 123588 kB' 'Mapped: 48764 kB' 'Shmem: 10464 kB' 'KReclaimable: 66904 kB' 'Slab: 139780 kB' 'SReclaimable: 66904 kB' 'SUnreclaim: 72876 kB' 'KernelStack: 6276 kB' 'PageTables: 4132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 346280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:08:42.868 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.868 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:42.868 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.868 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.868 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.868 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:42.868 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.868 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.868 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.868 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:42.868 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
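For custom_alloc, get_test_nr_hugepages was asked for 1048576 kB, which it turned into nr_hugepages=512; the test then exported HUGENODE='nodes_hp[0]=512' and re-ran scripts/setup.sh, and the meminfo snapshot printed just above confirms the result (HugePages_Total: 512, Hugetlb: 1048576 kB, Hugepagesize: 2048 kB). The size-to-page-count conversion behind that 512, sketched with this run's values:

    size_kb=1048576                                   # requested pool size in kB
    hugepagesize_kb=$(awk '$1 == "Hugepagesize:" {print $2}' /proc/meminfo)
    echo $(( size_kb / hugepagesize_kb ))             # 1048576 / 2048 = 512 pages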
00:08:42.868 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.868 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.868 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:42.868 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.868 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.868 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.868 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:42.868 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.868 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.868 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.868 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:42.868 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.868 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.868 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.868 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:42.868 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.868 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.868 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.868 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:42.868 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.868 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.868 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.868 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:42.868 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.868 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.868 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.868 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:42.868 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.868 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.868 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.868 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:42.868 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.868 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.868 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.868 17:09:28 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:08:42.868 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.868 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.868 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.868 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:42.868 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.868 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.868 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.868 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:42.868 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.868 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.868 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.868 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:42.868 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.868 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.868 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.868 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:42.868 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.868 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.868 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.868 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:42.868 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.868 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.870 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 9036156 kB' 'MemAvailable: 10547732 kB' 'Buffers: 2436 kB' 'Cached: 1723092 kB' 'SwapCached: 0 kB' 'Active: 494688 kB' 'Inactive: 1352880 kB' 'Active(anon): 132504 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1352880 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 123660 kB' 'Mapped: 48776 kB' 'Shmem: 10464 kB' 'KReclaimable: 66904 kB' 'Slab: 139772 kB' 'SReclaimable: 66904 kB' 'SUnreclaim: 72868 kB' 'KernelStack: 6292 kB' 'PageTables: 4208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 346280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 
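The block above is setup/common.sh's get_meminfo resolving AnonHugePages: it snapshots the counters with printf, then re-reads them one 'Key: value' pair at a time until the requested key matches, echoes the value and returns. A minimal standalone sketch of that parsing approach follows; the function name get_meminfo_value and the explicit node-argument guard are illustrative assumptions, not the exact SPDK helper.

    #!/usr/bin/env bash
    # Sketch of a meminfo key lookup in the style of the traced helper.
    shopt -s extglob   # needed for the +([0-9]) pattern used to strip "Node N " prefixes

    get_meminfo_value() { # usage: get_meminfo_value <Key> [<numa-node-id>]
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # Per-node copy, e.g. /sys/devices/system/node/node0/meminfo, when a node is given.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem <"$mem_f"
        # Per-node lines look like "Node 0 MemTotal: ..."; drop that prefix (no-op for /proc/meminfo).
        mem=("${mem[@]#Node +([0-9]) }")
        local var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && echo "$val" && return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo_value AnonHugePages   # prints 0 on this runner, matching anon=0 above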
00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:08:42.869 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
[setup/common.sh@18-31: same prologue as the AnonHugePages lookup above: node unset, mem_f=/proc/meminfo, mapfile -t mem, Node-prefix strip, IFS=': ', read -r var val _]
00:08:42.870 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 9036156 kB' 'MemAvailable: 10547732 kB' 'Buffers: 2436 kB' 'Cached: 1723092 kB' 'SwapCached: 0 kB' 'Active: 494688 kB' 'Inactive: 1352880 kB' 'Active(anon): 132504 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1352880 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 123660 kB' 'Mapped: 48776 kB' 'Shmem: 10464 kB' 'KReclaimable: 66904 kB' 'Slab: 139772 kB' 'SReclaimable: 66904 kB' 'SUnreclaim: 72868 kB' 'KernelStack: 6292 kB' 'PageTables: 4208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 346280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB'
[setup/common.sh@31-32: every field from MemTotal through HugePages_Rsvd fails the HugePages_Surp match and hits continue]
00:08:42.871 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:08:42.871 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:08:42.871 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:08:42.871 17:09:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0
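surp=0 above is HugePages_Surp, the count of surplus huge pages sitting beyond the static pool (they only appear when nr_overcommit_hugepages permits overcommit). The same counters the helper keeps re-parsing can also be read straight from the kernel; a quick sketch, assuming the 2048 kB huge page size reported in the snapshots (standard Linux paths, nothing SPDK-specific):

    # Aggregate counters, i.e. the fields the trace above is scanning:
    grep -E 'HugePages_(Total|Free|Rsvd|Surp)|Hugepagesize|Hugetlb|AnonHugePages' /proc/meminfo

    # Per-size counters for the 2 MiB pool:
    for f in nr_hugepages free_hugepages resv_hugepages surplus_hugepages nr_overcommit_hugepages; do
        printf '%s: ' "$f"
        cat "/sys/kernel/mm/hugepages/hugepages-2048kB/$f"
    done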
00:08:42.871 17:09:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:08:42.871 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
[setup/common.sh@18-31: same prologue: node unset, mem_f=/proc/meminfo, mapfile -t mem, Node-prefix strip, IFS=': ', read -r var val _]
00:08:42.872 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 9036408 kB' 'MemAvailable: 10547984 kB' 'Buffers: 2436 kB' 'Cached: 1723092 kB' 'SwapCached: 0 kB' 'Active: 494072 kB' 'Inactive: 1352880 kB' 'Active(anon): 131888 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1352880 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 123040 kB' 'Mapped: 48604 kB' 'Shmem: 10464 kB' 'KReclaimable: 66904 kB' 'Slab: 139772 kB' 'SReclaimable: 66904 kB' 'SUnreclaim: 72868 kB' 'KernelStack: 6320 kB' 'PageTables: 4232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 346280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB'
[setup/common.sh@31-32: every field from MemTotal through HugePages_Free fails the HugePages_Rsvd match and hits continue]
00:08:42.874 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:08:42.874 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:08:42.874 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:08:42.874 17:09:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
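resv=0 is HugePages_Rsvd, pages already promised to a mapping but not yet faulted in. Each lookup's prologue also probes /sys/devices/system/node/node$node/meminfo, so the helper appears to support per-NUMA-node queries when a node id is passed. A sketch of that node-scoped view, assuming node 0 and 2048 kB pages (standard kernel sysfs paths, not SPDK-specific):

    node=0
    # Node-scoped meminfo; lines carry a "Node 0" prefix, hence the strip in the helper.
    grep -E 'HugePages_(Total|Free|Surp)' "/sys/devices/system/node/node${node}/meminfo"

    # Node-scoped pool size for 2 MiB pages:
    cat "/sys/devices/system/node/node${node}/hugepages/hugepages-2048kB/nr_hugepages"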
00:08:42.874 17:09:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:08:42.874 nr_hugepages=512
17:09:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:08:42.874 resv_hugepages=0
17:09:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:08:42.874 surplus_hugepages=0
17:09:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:08:42.874 anon_hugepages=0
17:09:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:08:42.874 17:09:28 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
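The two arithmetic tests above compare the requested 512 pages against what /proc/meminfo reports: the visible pool must equal the target plus surplus and reserved pages, and with surp=0 and resv=0 it must match exactly. A condensed sketch of that same accounting (verify_hugepage_pool is a made-up name for illustration, not the hugepages.sh code):

    verify_hugepage_pool() { # usage: verify_hugepage_pool <expected-page-count>
        local expected=$1 total surp resv
        total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)
        surp=$(awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo)
        resv=$(awk '$1 == "HugePages_Rsvd:" {print $2}' /proc/meminfo)
        echo "nr_hugepages=$total surplus_hugepages=$surp resv_hugepages=$resv"
        ((expected == total + surp + resv)) || return 1   # every requested page is accounted for
        ((expected == total))                             # and none of them are surplus or reserved here
    }

    verify_hugepage_pool 512   # on this runner: 512 == 512 + 0 + 0, so both checks pass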
17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:08:42.874 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:08:42.874 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:42.874 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:42.874 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:42.874 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:42.874 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:42.874 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:42.874 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.874 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.874 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 9036408 kB' 'MemAvailable: 10547984 kB' 'Buffers: 2436 kB' 'Cached: 1723092 kB' 'SwapCached: 0 kB' 'Active: 494424 kB' 'Inactive: 1352880 kB' 'Active(anon): 132240 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1352880 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 123396 kB' 'Mapped: 48604 kB' 'Shmem: 10464 kB' 'KReclaimable: 66904 kB' 'Slab: 139772 kB' 'SReclaimable: 66904 kB' 'SUnreclaim: 72868 kB' 'KernelStack: 6320 kB' 'PageTables: 4232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 346280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:08:42.874 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.874 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:42.874 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.874 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.874 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.874 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:42.874 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.874 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.874 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.874 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:42.874 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.874 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:08:42.874 17:09:28 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # (same IFS=': ' / read -r var val _ / [[ field == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / continue cycle repeated for every field from Buffers through CmaTotal) 00:08:42.875 17:09:29
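This lookup was called without a node argument, so the [[ -e /sys/devices/system/node/node/meminfo ]] test above (note the empty node id in the path) fails and the system-wide /proc/meminfo is parsed; the very next call passes node 0 and switches to /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0 " prefix that is stripped before the same parser runs. A rough sketch of that source selection, assuming the usual sysfs layout (names are illustrative, and sed stands in for the extglob prefix-stripping the traced code uses):

  # Pick the meminfo source: whole system, or a single NUMA node.
  # Per-node files prefix each line with "Node <id> ", which is stripped
  # so the same "Key: value" parser can be reused unchanged.
  meminfo_lines() {
      local node=$1 mem_f=/proc/meminfo
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      sed -E 's/^Node [0-9]+ //' "$mem_f"
  }

  meminfo_lines ""  | grep HugePages_Total   # system-wide pool
  meminfo_lines "0" | grep HugePages_Total   # node 0 only
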
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.875 17:09:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:42.875 17:09:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.875 17:09:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.875 17:09:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.875 17:09:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:42.875 17:09:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.875 17:09:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.875 17:09:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:42.875 17:09:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:08:42.875 17:09:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:08:42.875 17:09:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:08:42.876 17:09:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:08:42.876 17:09:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:08:42.876 17:09:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:08:42.876 17:09:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:08:42.876 17:09:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:08:42.876 17:09:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:08:42.876 17:09:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:08:42.876 17:09:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:08:42.876 17:09:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:08:42.876 17:09:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:42.876 17:09:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:08:42.876 17:09:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:08:42.876 17:09:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:42.876 17:09:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:42.876 17:09:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:08:42.876 17:09:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:08:42.876 17:09:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:42.876 17:09:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:42.876 17:09:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.876 17:09:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.876 17:09:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 9036408 kB' 'MemUsed: 3205572 kB' 'SwapCached: 0 kB' 
'Active: 494340 kB' 'Inactive: 1352880 kB' 'Active(anon): 132156 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1352880 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'FilePages: 1725528 kB' 'Mapped: 48604 kB' 'AnonPages: 123304 kB' 'Shmem: 10464 kB' 'KernelStack: 6320 kB' 'PageTables: 4232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66904 kB' 'Slab: 139768 kB' 'SReclaimable: 66904 kB' 'SUnreclaim: 72864 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:08:42.876 17:09:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.876 17:09:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:42.876 17:09:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.876 17:09:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.876 17:09:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.876 17:09:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:42.876 17:09:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.876 17:09:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.876 17:09:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.876 17:09:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:42.876 17:09:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.876 17:09:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.876 17:09:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.876 17:09:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:42.876 17:09:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.876 17:09:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.876 17:09:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.876 17:09:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:42.876 17:09:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.876 17:09:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.876 17:09:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.876 17:09:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:42.876 17:09:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.876 17:09:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.876 17:09:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.876 17:09:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:42.876 17:09:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.876 17:09:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:08:42.876 17:09:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # (same IFS=': ' / read -r var val _ / [[ field == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue cycle repeated for every node0 field from Inactive(anon) through FilePmdMapped) 00:08:42.877 17:09:29 setup.sh.hugepages.custom_alloc --
setup/common.sh@31 -- # read -r var val _ 00:08:42.877 17:09:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.877 17:09:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:42.877 17:09:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.877 17:09:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.877 17:09:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.877 17:09:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:42.877 17:09:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.877 17:09:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.877 17:09:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.877 17:09:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:08:42.877 17:09:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:42.877 17:09:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:42.877 17:09:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:42.877 17:09:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:08:42.877 17:09:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:08:42.877 17:09:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:08:42.877 17:09:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:08:42.877 17:09:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:08:42.877 17:09:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:08:42.877 17:09:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:08:42.877 node0=512 expecting 512 00:08:42.877 ************************************ 00:08:42.877 END TEST custom_alloc 00:08:42.877 ************************************ 00:08:42.877 17:09:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:08:42.877 00:08:42.877 real 0m0.557s 00:08:42.877 user 0m0.260s 00:08:42.877 sys 0m0.296s 00:08:42.877 17:09:29 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:42.877 17:09:29 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:08:42.877 17:09:29 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:08:42.877 17:09:29 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:42.877 17:09:29 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:42.877 17:09:29 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:08:42.877 ************************************ 00:08:42.877 START TEST no_shrink_alloc 00:08:42.877 ************************************ 00:08:42.877 17:09:29 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # no_shrink_alloc 00:08:42.877 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:08:42.877 17:09:29 
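custom_alloc finishes green (node0=512 expecting 512, 512 == 512) and no_shrink_alloc opens by asking get_test_nr_hugepages for 2097152 kB on node 0; with the 2048 kB Hugepagesize reported in the meminfo dumps, that works out to 2097152 / 2048 = 1024 pages, and the trace that follows assigns the full count to node 0 (nodes_test[0]=1024). A worked sketch of that sizing under those assumptions; the helper name is hypothetical:

  # Convert a requested hugepage pool size (kB) into a page count and
  # give the full count to every requested NUMA node, mirroring the
  # single-node case traced below.
  default_hugepage_kb=2048                  # Hugepagesize from /proc/meminfo
  plan_test_hugepages() {
      local size_kb=$1; shift
      local nr=$((size_kb / default_hugepage_kb)) n
      echo "nr_hugepages=$nr"
      for n in "$@"; do
          echo "node${n}=$nr"
      done
  }

  plan_test_hugepages 2097152 0             # -> nr_hugepages=1024, node0=1024
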
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:08:42.877 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:08:42.877 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:08:42.877 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:08:42.877 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:08:42.877 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:08:42.877 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:08:42.877 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:08:42.877 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:08:42.877 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:08:42.878 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:08:42.878 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:08:42.878 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:08:42.878 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:08:42.878 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:08:42.878 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:08:42.878 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:08:42.878 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:08:42.878 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:08:42.878 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:08:42.878 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:43.153 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:43.416 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:43.416 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:43.416 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:08:43.416 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:08:43.416 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:08:43.416 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:08:43.416 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:08:43.416 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:08:43.416 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:08:43.416 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:08:43.416 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:08:43.416 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local 
get=AnonHugePages 00:08:43.416 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:08:43.416 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:08:43.416 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:43.416 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:43.416 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:43.416 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:43.416 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:43.416 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:43.416 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.417 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7992200 kB' 'MemAvailable: 9503776 kB' 'Buffers: 2436 kB' 'Cached: 1723092 kB' 'SwapCached: 0 kB' 'Active: 494660 kB' 'Inactive: 1352880 kB' 'Active(anon): 132476 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1352880 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 123592 kB' 'Mapped: 48712 kB' 'Shmem: 10464 kB' 'KReclaimable: 66904 kB' 'Slab: 139808 kB' 'SReclaimable: 66904 kB' 'SUnreclaim: 72904 kB' 'KernelStack: 6344 kB' 'PageTables: 4184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 346280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:08:43.417 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.417 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.417 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.417 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.417 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.417 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.417 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.417 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.417 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.417 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.417 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.417 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:08:43.417 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # (same IFS=': ' / read -r var val _ / [[ field == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue cycle repeated for every field from Buffers through VmallocUsed) 00:08:43.418 17:09:29 setup.sh.hugepages.no_shrink_alloc --
setup/common.sh@31 -- # IFS=': ' 00:08:43.418 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.418 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.418 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.418 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.418 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.418 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.418 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.418 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.418 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.418 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.418 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.418 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.418 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.418 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.418 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:08:43.418 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:08:43.418 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:08:43.418 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:08:43.418 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:43.418 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:08:43.418 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:08:43.418 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:43.418 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:43.418 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:43.418 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:43.418 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:43.418 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:43.418 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.418 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7992200 kB' 'MemAvailable: 9503776 kB' 'Buffers: 2436 kB' 'Cached: 1723092 kB' 'SwapCached: 0 kB' 'Active: 494464 kB' 'Inactive: 1352880 kB' 'Active(anon): 132280 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1352880 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 123388 kB' 'Mapped: 48608 kB' 'Shmem: 10464 kB' 
'KReclaimable: 66904 kB' 'Slab: 139808 kB' 'SReclaimable: 66904 kB' 'SUnreclaim: 72904 kB' 'KernelStack: 6320 kB' 'PageTables: 4232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 346280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:08:43.418 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.418 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.418 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.418 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.418 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.418 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.418 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.418 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.418 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.419 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.419 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.419 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.419 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.419 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.419 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.419 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.419 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.419 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.419 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.419 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.419 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.419 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.419 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.419 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.419 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.419 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.419 17:09:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.419 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.419 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.419 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.419 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.419 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.419 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.419 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.419 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.419 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.419 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.419 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.419 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.419 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.419 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.419 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.419 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.419 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.419 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.419 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.419 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.419 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.419 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.419 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.419 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.419 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.419 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.419 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.419 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.419 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.419 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.419 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.419 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.419 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.419 
17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.419 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.419 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.419 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.419 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.419 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.419 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.419 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.419 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.419 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.419 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.419 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.419 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.419 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.419 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.419 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.419 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.419 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.419 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.419 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.419 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.419 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.419 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.419 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.419 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.419 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.419 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.419 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.419 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.419 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.419 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.419 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.419 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.419 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.419 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.419 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.420 17:09:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.420 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.421 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.421 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.421 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.421 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.421 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:08:43.421 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:08:43.421 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:08:43.421 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:08:43.421 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:08:43.421 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:08:43.421 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:08:43.421 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:43.421 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:43.421 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:43.421 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:43.421 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:43.421 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:43.421 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.421 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7992692 kB' 'MemAvailable: 9504268 kB' 'Buffers: 2436 kB' 'Cached: 1723092 kB' 'SwapCached: 0 kB' 'Active: 494336 kB' 'Inactive: 1352880 kB' 'Active(anon): 132152 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1352880 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 123312 kB' 'Mapped: 48608 kB' 'Shmem: 10464 kB' 'KReclaimable: 66904 kB' 'Slab: 139804 kB' 'SReclaimable: 66904 kB' 'SUnreclaim: 72900 kB' 'KernelStack: 6320 kB' 'PageTables: 4232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 346280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 
54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:08:43.421 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.421 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.421 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.421 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.421 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.421 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.421 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.421 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.421 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.421 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.421 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.421 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.421 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.421 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.421 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.421 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.421 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.421 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.421 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.421 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.421 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.421 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.421 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.421 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.421 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.421 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.421 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.421 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.421 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.421 17:09:29 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.421 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.421 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.421 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.421 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.421 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.421 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.421 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.421 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.421 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.421 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.421 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.421 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.421 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.421 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.421 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.421 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.421 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.421 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.421 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.421 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.421 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.421 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.421 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.421 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.421 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.421 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.421 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.421 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.421 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.421 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.421 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.421 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.421 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.421 
17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.421 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.421 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.421 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.421 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.421 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.421 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.421 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.421 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.421 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.421 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.421 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.421 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.421 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.421 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.421 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.421 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.421 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.422 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.422 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.422 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.422 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.422 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.422 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.422 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.422 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.422 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.422 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.422 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.422 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.422 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.422 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.422 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.422 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.422 17:09:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.422 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.422 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.422 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.422 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.422 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.422 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.422 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.422 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.422 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.422 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.422 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.422 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.422 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.422 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.422 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.422 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.422 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.422 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.422 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.422 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.422 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.422 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.422 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.422 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.422 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.422 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.422 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.422 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.422 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.422 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.422 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.422 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.422 17:09:29 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:08:43.422 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.422 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.422 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.422 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.422 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.422 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.422 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.422 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.422 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.422 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.422 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.422 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.422 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.422 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.422 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.422 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.422 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.422 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.422 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.422 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.422 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.422 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.422 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.422 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.422 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.422 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.422 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.422 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.422 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.422 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.422 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.422 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.422 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.423 17:09:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:08:43.423 nr_hugepages=1024 00:08:43.423 resv_hugepages=0 00:08:43.423 surplus_hugepages=0 00:08:43.423 anon_hugepages=0 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7992692 kB' 'MemAvailable: 9504268 kB' 'Buffers: 2436 kB' 'Cached: 1723092 kB' 'SwapCached: 0 kB' 'Active: 494328 kB' 'Inactive: 1352880 kB' 'Active(anon): 132144 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1352880 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 123300 kB' 'Mapped: 48608 kB' 'Shmem: 10464 kB' 'KReclaimable: 66904 kB' 'Slab: 139800 kB' 'SReclaimable: 66904 kB' 'SUnreclaim: 72896 kB' 'KernelStack: 6304 kB' 'PageTables: 4192 kB' 
'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 346280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.423 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.424 
17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.424 
17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.424 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7992692 kB' 'MemUsed: 4249288 kB' 'SwapCached: 0 kB' 'Active: 494084 kB' 'Inactive: 1352880 kB' 'Active(anon): 131900 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1352880 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'FilePages: 1725528 kB' 'Mapped: 48608 kB' 'AnonPages: 123264 kB' 'Shmem: 10464 kB' 'KernelStack: 6288 kB' 'PageTables: 4152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66904 kB' 'Slab: 139792 kB' 'SReclaimable: 66904 kB' 'SUnreclaim: 72888 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.425 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.426 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.426 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.426 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.426 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.426 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.426 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.426 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.426 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.426 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.426 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.426 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.426 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.426 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.426 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.426 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.426 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.426 
17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.426 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.426 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.426 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.426 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.426 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.426 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.426 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.426 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.426 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.426 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.426 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.426 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.426 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.426 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.426 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.426 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.426 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.426 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.426 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.426 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.426 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.426 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.426 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.426 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.426 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.426 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.426 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.426 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.426 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.426 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.426 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.426 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.426 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.426 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.426 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.426 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.426 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.426 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.426 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.426 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.426 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.426 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.426 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.426 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.426 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.426 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.426 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.426 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.426 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.426 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.426 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.426 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.426 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.426 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.426 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.426 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.426 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.426 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.426 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.426 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.426 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.426 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.426 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.426 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.426 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.426 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.426 17:09:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.426 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.426 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.426 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.426 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.426 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.426 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:43.426 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:08:43.426 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:08:43.426 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:08:43.426 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:08:43.426 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:08:43.426 node0=1024 expecting 1024 00:08:43.426 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:08:43.426 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:08:43.426 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:08:43.427 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:08:43.427 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:08:43.427 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:08:43.427 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:08:43.427 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:43.999 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:43.999 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:43.999 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:08:43.999 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:08:43.999 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:08:43.999 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:08:43.999 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:08:43.999 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:08:43.999 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:08:43.999 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:08:43.999 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:08:43.999 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:08:43.999 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:08:43.999 17:09:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:08:43.999 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:08:43.999 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:08:43.999 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:43.999 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:43.999 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:43.999 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:43.999 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:43.999 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:43.999 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.999 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7992132 kB' 'MemAvailable: 9503708 kB' 'Buffers: 2436 kB' 'Cached: 1723092 kB' 'SwapCached: 0 kB' 'Active: 495164 kB' 'Inactive: 1352880 kB' 'Active(anon): 132980 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1352880 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 124152 kB' 'Mapped: 49112 kB' 'Shmem: 10464 kB' 'KReclaimable: 66904 kB' 'Slab: 139824 kB' 'SReclaimable: 66904 kB' 'SUnreclaim: 72920 kB' 'KernelStack: 6340 kB' 'PageTables: 4372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 346280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:08:43.999 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.999 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.999 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.999 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.999 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.999 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.999 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.999 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.999 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.999 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.999 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.999 17:09:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.999 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.999 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.999 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.999 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.999 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.999 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.999 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.999 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.999 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.999 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.999 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.999 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.999 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.999 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:43.999 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:43.999 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:43.999 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:43.999 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.000 17:09:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.000 
17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.000 17:09:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.000 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.001 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.001 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.001 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.001 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.001 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.001 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.001 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.001 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.001 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.001 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.001 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.001 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:08:44.001 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:08:44.001 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:08:44.001 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:08:44.001 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:08:44.001 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:44.001 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:08:44.001 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:08:44.001 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:44.001 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:44.001 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:44.001 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:44.001 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:44.001 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:44.001 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.001 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.001 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7992384 kB' 'MemAvailable: 9503960 kB' 'Buffers: 2436 kB' 'Cached: 1723092 kB' 'SwapCached: 0 kB' 'Active: 494436 kB' 'Inactive: 1352880 kB' 'Active(anon): 132252 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1352880 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 
'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 123392 kB' 'Mapped: 48796 kB' 'Shmem: 10464 kB' 'KReclaimable: 66904 kB' 'Slab: 139824 kB' 'SReclaimable: 66904 kB' 'SUnreclaim: 72920 kB' 'KernelStack: 6336 kB' 'PageTables: 4288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 346280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:08:44.001 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.001 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.001 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.001 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.001 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.001 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.001 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.001 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.001 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.001 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.001 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.001 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.001 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.001 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.001 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.001 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.001 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.001 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.001 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.001 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.001 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.001 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.001 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.001 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.001 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.001 17:09:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.001 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.001 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.001 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.001 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.001 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.001 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.001 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.001 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.001 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.001 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.001 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.001 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.001 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.001 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.001 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.001 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.001 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.001 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.001 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.001 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.001 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.001 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.001 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.001 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.001 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.001 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.001 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.001 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.001 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.001 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.001 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.001 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.001 17:09:30 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.001 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.001 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.001 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.001 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.001 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.001 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.001 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.001 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.001 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.001 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.001 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.001 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.001 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.001 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.001 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.001 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.001 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.002 17:09:30 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.002 17:09:30 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.002 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7993164 kB' 'MemAvailable: 9504740 kB' 'Buffers: 2436 kB' 'Cached: 1723092 kB' 'SwapCached: 0 kB' 'Active: 494436 kB' 'Inactive: 1352880 kB' 'Active(anon): 132252 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1352880 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 123372 kB' 'Mapped: 48608 kB' 'Shmem: 10464 kB' 'KReclaimable: 66904 kB' 'Slab: 139824 kB' 'SReclaimable: 66904 kB' 'SUnreclaim: 72920 kB' 'KernelStack: 6336 kB' 'PageTables: 4276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 
'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 346280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.003 17:09:30 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.003 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.004 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.004 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.004 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.004 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.004 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.004 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.004 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.004 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.004 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.004 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.004 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.004 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.004 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.004 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.004 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.004 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.004 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.004 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.004 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.004 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.004 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.004 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.004 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.004 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.004 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.004 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.004 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.004 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.004 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.004 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.004 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.004 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.004 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:08:44.004 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.004 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.004 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.004 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.004 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.004 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.004 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.004 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.004 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.004 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.004 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.004 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.004 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.004 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.004 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.004 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.004 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.004 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.004 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.004 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.004 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.004 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.004 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.004 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.004 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.004 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.004 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.004 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.004 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.004 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.004 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.004 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.004 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.004 17:09:30 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.004 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.004 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.004 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.004 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.004 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.004 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.004 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.004 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.004 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.004 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.004 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.004 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.004 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.004 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.004 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.004 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.004 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.004 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.004 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.004 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.004 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.004 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.004 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.004 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.004 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.004 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.004 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.005 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.005 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.005 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.005 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.005 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.005 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
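(The long runs of '[[ <field> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]' / 'continue' entries above and below are bash xtrace from the get_meminfo helper in setup/common.sh: it snapshots /proc/meminfo, then walks it key by key with 'IFS=': ' read -r var val _' until the requested field — HugePages_Rsvd in this pass, HugePages_Surp and AnonHugePages in the earlier ones — matches, echoes its value and returns 0. A simplified sketch of that lookup, reconstructed only from the trace: the function name carries a _sketch suffix to mark it as a reconstruction, and the real helper's per-node 'Node N ' prefix stripping via mapfile is omitted here.)

    # Sketch of the meminfo lookup the xtrace shows, not the shipped function.
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local var val
        local mem_f=/proc/meminfo
        # Prefer the per-node file when a node is given and sysfs exposes it,
        # mirroring the '[[ -e /sys/devices/system/node/node/meminfo ]]' check
        # in the trace (empty $node is why the path reads node/node there).
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        # Split each line on ':' and spaces; skip ("continue") until the key
        # matches, then print the numeric value and stop.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < "$mem_f"
        return 1
    }
    # Usage matching this log: get_meminfo_sketch HugePages_Rsvd   # prints 0 on this runner
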
00:08:44.005 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.005 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.005 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.005 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.005 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.005 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.005 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.005 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.005 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.005 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.005 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.005 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.005 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.005 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.005 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.005 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.005 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.005 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.005 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.005 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.005 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.005 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.005 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.005 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.005 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.005 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.005 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.005 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.005 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.005 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.005 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.005 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.005 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.005 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:08:44.005 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.005 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.005 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.005 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.005 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:08:44.005 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:08:44.005 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:08:44.005 nr_hugepages=1024 00:08:44.005 resv_hugepages=0 00:08:44.005 surplus_hugepages=0 00:08:44.005 anon_hugepages=0 00:08:44.005 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:08:44.005 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:08:44.005 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:08:44.005 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:08:44.005 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:08:44.005 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:08:44.005 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:08:44.005 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:08:44.005 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:08:44.005 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:08:44.005 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:08:44.005 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:44.005 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:44.005 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:08:44.005 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:08:44.005 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:44.005 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:44.005 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.005 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.005 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7993380 kB' 'MemAvailable: 9504956 kB' 'Buffers: 2436 kB' 'Cached: 1723092 kB' 'SwapCached: 0 kB' 'Active: 494332 kB' 'Inactive: 1352880 kB' 'Active(anon): 132148 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1352880 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 123252 kB' 'Mapped: 48608 kB' 'Shmem: 10464 kB' 'KReclaimable: 66904 kB' 
'Slab: 139820 kB' 'SReclaimable: 66904 kB' 'SUnreclaim: 72916 kB' 'KernelStack: 6304 kB' 'PageTables: 4192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 346280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 5081088 kB' 'DirectMap1G: 9437184 kB' 00:08:44.005 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.005 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.005 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.005 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.005 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.005 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.005 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.005 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.005 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.005 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.005 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.005 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.005 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.005 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.005 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.005 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.005 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.005 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.005 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.005 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.005 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.005 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.005 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.005 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.006 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.006 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.006 17:09:30 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:08:44.006 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[trace condensed: setup/common.sh@32 compared each remaining /proc/meminfo key (Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted) against HugePages_Total and hit continue for every one, 00:08:44.006-00:08:44.007]
00:08:44.007 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:08:44.007 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:08:44.007 17:09:30
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:08:44.007 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:08:44.007 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:08:44.007 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:08:44.007 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:08:44.007 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:08:44.007 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:08:44.007 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:08:44.007 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:08:44.007 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:08:44.007 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:08:44.007 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:08:44.007 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:08:44.007 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:08:44.007 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:08:44.007 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:08:44.007 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:08:44.007 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:08:44.007 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:08:44.007 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:08:44.007 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.007 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7993744 kB' 'MemUsed: 4248236 kB' 'SwapCached: 0 kB' 'Active: 494112 kB' 'Inactive: 1352880 kB' 'Active(anon): 131928 kB' 'Inactive(anon): 0 kB' 'Active(file): 362184 kB' 'Inactive(file): 1352880 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'FilePages: 1725528 kB' 'Mapped: 48608 kB' 'AnonPages: 123328 kB' 'Shmem: 10464 kB' 'KernelStack: 6320 kB' 'PageTables: 4236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66904 kB' 'Slab: 139812 kB' 'SReclaimable: 66904 kB' 'SUnreclaim: 72908 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:08:44.007 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.007 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.007 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.007 17:09:30 
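The xtrace here is setup/common.sh's get_meminfo at work: it reads /proc/meminfo, or /sys/devices/system/node/node<N>/meminfo when a node id is passed, strips the "Node <N> " prefix from per-node rows, splits each row with IFS=': ', and echoes the value once the requested key matches. A minimal standalone sketch of the same pattern; the function name and error handling below are illustrative, not the SPDK helper itself.

get_meminfo_value() { # sketch only: look up one meminfo key, optionally per NUMA node
  local key=$1 node=${2:-}
  local file=/proc/meminfo
  if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
    file=/sys/devices/system/node/node$node/meminfo
  fi
  local line var val _
  while IFS= read -r line; do
    # per-node files prefix every row with "Node <n> "; drop it before splitting
    [[ $line =~ ^Node\ [0-9]+\ (.*)$ ]] && line=${BASH_REMATCH[1]}
    IFS=': ' read -r var val _ <<<"$line"
    if [[ $var == "$key" ]]; then
      echo "$val"
      return 0
    fi
  done <"$file"
  return 1
}
# e.g. get_meminfo_value HugePages_Total     -> 1024 (system-wide)
#      get_meminfo_value HugePages_Surp 0    -> 0    (node0, as in the trace below)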
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.007 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[trace condensed: setup/common.sh@32 compared each remaining node0 meminfo key (MemFree, MemUsed, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted) against HugePages_Surp and hit continue for every one, 00:08:44.007-00:08:44.008]
00:08:44.008 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:08:44.008 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.008 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.008 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.008 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.008 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:08:44.008 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:08:44.008 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:08:44.008 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:08:44.008 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:08:44.008 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:08:44.008 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:08:44.008 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:08:44.008 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:08:44.008 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:08:44.009 node0=1024 expecting 1024 00:08:44.009 ************************************ 00:08:44.009 END TEST no_shrink_alloc 00:08:44.009 ************************************ 00:08:44.009 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:08:44.009 17:09:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:08:44.009 00:08:44.009 real 0m1.082s 00:08:44.009 user 0m0.558s 00:08:44.009 sys 0m0.532s 00:08:44.009 17:09:30 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:44.009 17:09:30 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:08:44.009 17:09:30 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:08:44.009 17:09:30 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:08:44.009 17:09:30 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:08:44.009 17:09:30 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:08:44.009 17:09:30 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:08:44.009 17:09:30 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:08:44.009 17:09:30 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:08:44.009 17:09:30 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:08:44.009 17:09:30 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:08:44.009 ************************************ 00:08:44.009 END TEST hugepages 00:08:44.009 ************************************ 00:08:44.009 00:08:44.009 real 0m4.619s 00:08:44.009 user 0m2.270s 00:08:44.009 sys 0m2.361s 00:08:44.009 17:09:30 setup.sh.hugepages -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:44.009 17:09:30 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:08:44.267 17:09:30 setup.sh -- 
setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:08:44.267 17:09:30 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:44.267 17:09:30 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:44.267 17:09:30 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:08:44.267 ************************************ 00:08:44.267 START TEST driver 00:08:44.267 ************************************ 00:08:44.267 17:09:30 setup.sh.driver -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:08:44.267 * Looking for test storage... 00:08:44.267 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:08:44.267 17:09:30 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:08:44.267 17:09:30 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:08:44.267 17:09:30 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:44.829 17:09:30 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:08:44.829 17:09:30 setup.sh.driver -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:44.829 17:09:30 setup.sh.driver -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:44.830 17:09:30 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:08:44.830 ************************************ 00:08:44.830 START TEST guess_driver 00:08:44.830 ************************************ 00:08:44.830 17:09:30 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # guess_driver 00:08:44.830 17:09:30 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:08:44.830 17:09:30 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:08:44.830 17:09:30 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:08:44.830 17:09:30 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:08:44.830 17:09:30 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:08:44.830 17:09:30 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:08:44.830 17:09:30 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:08:44.830 17:09:30 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:08:44.830 17:09:30 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:08:44.830 17:09:30 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:08:44.830 17:09:30 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:08:44.830 17:09:30 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:08:44.830 17:09:30 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:08:44.830 17:09:30 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:08:44.830 17:09:30 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:08:44.830 17:09:30 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:08:44.830 17:09:30 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:08:44.830 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:08:44.830 17:09:30 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo 
uio_pci_generic 00:08:44.830 17:09:30 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:08:44.830 17:09:30 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:08:44.830 Looking for driver=uio_pci_generic 00:08:44.830 17:09:30 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:08:44.830 17:09:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:08:44.830 17:09:30 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:08:44.830 17:09:30 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:08:44.830 17:09:30 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:08:45.397 17:09:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:08:45.397 17:09:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:08:45.397 17:09:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:08:45.397 17:09:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:08:45.397 17:09:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:08:45.397 17:09:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:08:45.680 17:09:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:08:45.680 17:09:31 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:08:45.680 17:09:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:08:45.680 17:09:31 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:08:45.680 17:09:31 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:08:45.680 17:09:31 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:08:45.680 17:09:31 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:46.246 00:08:46.246 real 0m1.392s 00:08:46.246 user 0m0.544s 00:08:46.246 sys 0m0.855s 00:08:46.246 17:09:32 setup.sh.driver.guess_driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:46.246 ************************************ 00:08:46.246 END TEST guess_driver 00:08:46.246 ************************************ 00:08:46.246 17:09:32 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:08:46.246 ************************************ 00:08:46.246 END TEST driver 00:08:46.246 ************************************ 00:08:46.246 00:08:46.246 real 0m2.042s 00:08:46.246 user 0m0.798s 00:08:46.246 sys 0m1.302s 00:08:46.246 17:09:32 setup.sh.driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:46.246 17:09:32 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:08:46.246 17:09:32 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:08:46.246 17:09:32 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:46.246 17:09:32 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:46.246 17:09:32 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:08:46.246 ************************************ 00:08:46.246 START TEST devices 00:08:46.246 
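The guess_driver run just traced picks a VFIO-backed driver only when IOMMU groups are populated (or unsafe no-IOMMU mode is enabled) and otherwise falls back to uio_pci_generic, using modprobe --show-depends to confirm the module actually resolves to a .ko on this kernel. A condensed sketch of that decision; the function name and the vfio-pci echo are assumptions for illustration, and the real driver.sh does more bookkeeping.

pick_pci_driver() { # sketch: prefer a VFIO driver when the IOMMU is usable, else uio_pci_generic
  local unsafe="" n_groups
  n_groups=$(compgen -G '/sys/kernel/iommu_groups/*' | wc -l)
  [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
    unsafe=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
  if (( n_groups > 0 )) || [[ $unsafe == Y ]]; then
    echo vfio-pci    # assumed driver name for the VFIO path; not taken in this run
    return 0
  fi
  # accept the fallback only if the module resolves to a real .ko on this kernel
  if modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
    echo uio_pci_generic
    return 0
  fi
  echo 'No valid driver found' >&2
  return 1
}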
************************************ 00:08:46.246 17:09:32 setup.sh.devices -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:08:46.246 * Looking for test storage... 00:08:46.246 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:08:46.246 17:09:32 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:08:46.246 17:09:32 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:08:46.246 17:09:32 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:08:46.246 17:09:32 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:47.180 17:09:33 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:08:47.180 17:09:33 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:08:47.180 17:09:33 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:08:47.180 17:09:33 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:08:47.180 17:09:33 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:08:47.180 17:09:33 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:08:47.180 17:09:33 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:08:47.180 17:09:33 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:08:47.180 17:09:33 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:08:47.180 17:09:33 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:08:47.180 17:09:33 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n2 00:08:47.180 17:09:33 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:08:47.180 17:09:33 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:08:47.180 17:09:33 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:08:47.180 17:09:33 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:08:47.180 17:09:33 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n3 00:08:47.180 17:09:33 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:08:47.180 17:09:33 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:08:47.180 17:09:33 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:08:47.180 17:09:33 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:08:47.180 17:09:33 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme1n1 00:08:47.180 17:09:33 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:08:47.180 17:09:33 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:08:47.180 17:09:33 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:08:47.180 17:09:33 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:08:47.180 17:09:33 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:08:47.180 17:09:33 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:08:47.180 17:09:33 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:08:47.180 17:09:33 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 
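Before picking a test disk, the devices test filters out zoned namespaces by reading /sys/block/<dev>/queue/zoned and treating anything other than "none" as zoned, exactly as the is_block_zoned calls above show. A small sketch of that check; the helper name is illustrative.

collect_zoned_nvme_devs() { # sketch of the zoned-namespace filter traced above
  local dev zoned
  for dev in /sys/block/nvme*; do
    [[ -e $dev/queue/zoned ]] || continue
    zoned=$(<"$dev/queue/zoned")
    # anything other than "none" (e.g. host-managed) needs zoned-aware handling
    [[ $zoned != none ]] && echo "${dev##*/}"
  done
}
# in this run every namespace reports "none", so the zoned list stays empty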
00:08:47.180 17:09:33 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:08:47.180 17:09:33 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:08:47.180 17:09:33 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:08:47.180 17:09:33 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:08:47.180 17:09:33 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:08:47.180 17:09:33 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:08:47.180 17:09:33 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:08:47.180 17:09:33 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:08:47.180 No valid GPT data, bailing 00:08:47.180 17:09:33 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:08:47.180 17:09:33 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:08:47.180 17:09:33 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:08:47.180 17:09:33 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:08:47.180 17:09:33 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:47.180 17:09:33 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:47.180 17:09:33 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:08:47.180 17:09:33 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:08:47.180 17:09:33 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:08:47.180 17:09:33 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:08:47.180 17:09:33 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:08:47.180 17:09:33 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:08:47.180 17:09:33 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:08:47.180 17:09:33 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:08:47.180 17:09:33 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:08:47.180 17:09:33 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:08:47.180 17:09:33 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:08:47.180 17:09:33 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:08:47.180 No valid GPT data, bailing 00:08:47.180 17:09:33 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:08:47.180 17:09:33 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:08:47.180 17:09:33 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:08:47.180 17:09:33 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:08:47.180 17:09:33 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n2 00:08:47.180 17:09:33 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:08:47.180 17:09:33 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:08:47.180 17:09:33 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:08:47.180 17:09:33 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:08:47.180 17:09:33 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:08:47.180 17:09:33 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 
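Each candidate namespace is accepted only if it carries no partition table (probed with blkid -s PTTYPE -o value alongside SPDK's spdk-gpt.py, hence the "No valid GPT data, bailing" lines) and is at least min_disk_size = 3221225472 bytes (3 GiB). A rough sketch of that gate; deriving the size from the 512-byte sector count in sysfs is an assumption about how the helper gets its number, not a quote of it.

is_usable_test_disk() { # sketch: no partition table and >= 3 GiB
  local dev=$1 min_size=$((3 * 1024 * 1024 * 1024))
  local pt size_bytes
  pt=$(blkid -s PTTYPE -o value "/dev/$dev" 2>/dev/null)
  [[ -z $pt ]] || return 1                              # a PTTYPE means the disk is in use
  size_bytes=$(( $(< "/sys/block/$dev/size") * 512 ))   # sysfs reports 512-byte sectors
  (( size_bytes >= min_size ))
}
# e.g. is_usable_test_disk nvme0n1 && echo "nvme0n1 is free and big enough"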
00:08:47.180 17:09:33 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:08:47.180 17:09:33 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:08:47.180 17:09:33 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:08:47.180 17:09:33 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:08:47.180 17:09:33 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:08:47.180 17:09:33 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:08:47.180 17:09:33 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:08:47.180 No valid GPT data, bailing 00:08:47.180 17:09:33 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:08:47.180 17:09:33 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:08:47.180 17:09:33 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:08:47.180 17:09:33 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:08:47.180 17:09:33 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n3 00:08:47.180 17:09:33 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:08:47.180 17:09:33 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:08:47.180 17:09:33 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:08:47.180 17:09:33 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:08:47.180 17:09:33 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:08:47.180 17:09:33 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:08:47.180 17:09:33 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:08:47.180 17:09:33 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:08:47.180 17:09:33 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:08:47.180 17:09:33 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:08:47.181 17:09:33 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:08:47.181 17:09:33 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:08:47.181 17:09:33 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:08:47.438 No valid GPT data, bailing 00:08:47.438 17:09:33 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:08:47.438 17:09:33 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:08:47.438 17:09:33 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:08:47.438 17:09:33 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:08:47.438 17:09:33 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:08:47.438 17:09:33 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:08:47.438 17:09:33 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:08:47.438 17:09:33 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:08:47.438 17:09:33 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:08:47.438 17:09:33 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:08:47.438 17:09:33 setup.sh.devices -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:08:47.438 17:09:33 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:08:47.438 17:09:33 setup.sh.devices 
-- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:08:47.438 17:09:33 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:47.438 17:09:33 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:47.438 17:09:33 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:08:47.438 ************************************ 00:08:47.438 START TEST nvme_mount 00:08:47.438 ************************************ 00:08:47.438 17:09:33 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # nvme_mount 00:08:47.438 17:09:33 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:08:47.438 17:09:33 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:08:47.438 17:09:33 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:47.438 17:09:33 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:08:47.438 17:09:33 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:08:47.438 17:09:33 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:08:47.438 17:09:33 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:08:47.438 17:09:33 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:08:47.438 17:09:33 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:08:47.438 17:09:33 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:08:47.438 17:09:33 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:08:47.438 17:09:33 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:08:47.438 17:09:33 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:08:47.438 17:09:33 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:08:47.438 17:09:33 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:08:47.438 17:09:33 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:08:47.438 17:09:33 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:08:47.438 17:09:33 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:08:47.438 17:09:33 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:08:48.372 Creating new GPT entries in memory. 00:08:48.372 GPT data structures destroyed! You may now partition the disk using fdisk or 00:08:48.372 other utilities. 00:08:48.372 17:09:34 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:08:48.372 17:09:34 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:08:48.372 17:09:34 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:08:48.372 17:09:34 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:08:48.372 17:09:34 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:08:49.306 Creating new GPT entries in memory. 00:08:49.306 The operation has completed successfully. 
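The partition just created follows directly from the traced arithmetic: the requested 1073741824 bytes divided by the 4096-byte sector size gives 262144 sectors, so the first partition runs from sector 2048 to 2048 + 262144 - 1 = 264191, and a uevent listener waits for the new /dev node before the test continues. A hedged sketch of the same sequence; it substitutes udevadm settle for SPDK's sync_dev_uevents.sh helper.

partition_test_disk() { # sketch: zap the disk, create one partition, wait for its node
  local disk=$1 size_bytes=${2:-1073741824} sector_size=4096
  local sectors=$(( size_bytes / sector_size ))
  local start=2048 end=$(( start + sectors - 1 ))
  sgdisk "/dev/$disk" --zap-all                          # destroy old GPT/MBR structures
  flock "/dev/$disk" sgdisk "/dev/$disk" --new=1:"$start":"$end"
  udevadm settle                                         # stand-in for sync_dev_uevents.sh
  [[ -b /dev/${disk}p1 ]]
}
# e.g. partition_test_disk nvme0n1  ->  /dev/nvme0n1p1, matching --new=1:2048:264191 above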
00:08:49.306 17:09:35 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:08:49.306 17:09:35 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:08:49.306 17:09:35 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 58936 00:08:49.565 17:09:35 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:49.565 17:09:35 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:08:49.565 17:09:35 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:49.565 17:09:35 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:08:49.565 17:09:35 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:08:49.565 17:09:35 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:49.565 17:09:35 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:08:49.565 17:09:35 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:08:49.565 17:09:35 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:08:49.565 17:09:35 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:49.565 17:09:35 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:08:49.565 17:09:35 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:08:49.565 17:09:35 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:08:49.565 17:09:35 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:08:49.565 17:09:35 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:08:49.565 17:09:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:49.565 17:09:35 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:08:49.565 17:09:35 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:08:49.565 17:09:35 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:08:49.565 17:09:35 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:08:49.565 17:09:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:08:49.565 17:09:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:08:49.565 17:09:35 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:08:49.565 17:09:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:49.565 17:09:35 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:08:49.565 17:09:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:49.824 17:09:35 
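With the partition in place, the test formats it, mounts it under the repo's test/setup tree, drops a marker file for the verify step, and later tears everything down with mountpoint -q, umount, and wipefs, as the following trace lines show. A compact sketch of that mount/verify/cleanup cycle; the mount path here is shortened for illustration.

mnt=/tmp/nvme_mount_test            # illustrative path; the log uses .../test/setup/nvme_mount
part=/dev/nvme0n1p1
mkdir -p "$mnt"
mkfs.ext4 -qF "$part"               # -q quiet, -F force even if the device looks busy
mount "$part" "$mnt"
touch "$mnt/test_nvme"              # marker file the verify step checks for
mountpoint -q "$mnt" && [[ -e $mnt/test_nvme ]] && echo "mount verified"
# cleanup mirrors the end of the trace:
rm -f "$mnt/test_nvme"
umount "$mnt"
wipefs --all "$part"                # clear the ext4 signature so the next run starts clean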
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:08:49.824 17:09:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:49.824 17:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:08:49.824 17:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:50.083 17:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:08:50.083 17:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:08:50.083 17:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:50.083 17:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:08:50.083 17:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:08:50.083 17:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:08:50.083 17:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:50.083 17:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:50.083 17:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:08:50.083 17:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:08:50.083 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:08:50.083 17:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:08:50.083 17:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:08:50.342 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:08:50.342 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:08:50.342 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:08:50.342 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:08:50.342 17:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:08:50.342 17:09:36 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:08:50.342 17:09:36 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:50.342 17:09:36 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:08:50.342 17:09:36 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:08:50.342 17:09:36 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:50.342 17:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:08:50.342 17:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:08:50.342 17:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local 
mounts=nvme0n1:nvme0n1 00:08:50.342 17:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:50.342 17:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:08:50.342 17:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:08:50.342 17:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:08:50.342 17:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:08:50.342 17:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:08:50.342 17:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:08:50.342 17:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:50.342 17:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:08:50.342 17:09:36 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:08:50.342 17:09:36 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:08:50.601 17:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:08:50.601 17:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:08:50.601 17:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:08:50.601 17:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:50.601 17:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:08:50.601 17:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:50.601 17:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:08:50.601 17:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:50.601 17:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:08:50.601 17:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:50.873 17:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:08:50.873 17:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:08:50.873 17:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:50.873 17:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:08:50.873 17:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:08:50.873 17:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:50.873 17:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:08:50.873 17:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:08:50.873 17:09:36 setup.sh.devices.nvme_mount -- 
setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:08:50.873 17:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:08:50.873 17:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:08:50.873 17:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:08:50.873 17:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:08:50.873 17:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:08:50.873 17:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:50.873 17:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:08:50.873 17:09:36 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:08:50.873 17:09:36 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:08:50.873 17:09:36 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:08:51.132 17:09:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:08:51.132 17:09:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:08:51.132 17:09:37 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:08:51.132 17:09:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:51.132 17:09:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:08:51.132 17:09:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:51.132 17:09:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:08:51.132 17:09:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:51.390 17:09:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:08:51.390 17:09:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:51.390 17:09:37 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:08:51.390 17:09:37 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:08:51.390 17:09:37 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:08:51.390 17:09:37 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:08:51.390 17:09:37 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:51.390 17:09:37 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:08:51.391 17:09:37 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:08:51.391 17:09:37 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:08:51.391 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:08:51.391 00:08:51.391 real 0m4.029s 00:08:51.391 user 0m0.700s 00:08:51.391 sys 0m1.067s 00:08:51.391 17:09:37 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:51.391 17:09:37 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:08:51.391 ************************************ 00:08:51.391 END TEST nvme_mount 00:08:51.391 
************************************ 00:08:51.391 17:09:37 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:08:51.391 17:09:37 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:51.391 17:09:37 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:51.391 17:09:37 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:08:51.391 ************************************ 00:08:51.391 START TEST dm_mount 00:08:51.391 ************************************ 00:08:51.391 17:09:37 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # dm_mount 00:08:51.391 17:09:37 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:08:51.391 17:09:37 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:08:51.391 17:09:37 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:08:51.391 17:09:37 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:08:51.391 17:09:37 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:08:51.391 17:09:37 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:08:51.391 17:09:37 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:08:51.391 17:09:37 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:08:51.391 17:09:37 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:08:51.391 17:09:37 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:08:51.391 17:09:37 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:08:51.391 17:09:37 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:08:51.391 17:09:37 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:08:51.391 17:09:37 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:08:51.391 17:09:37 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:08:51.391 17:09:37 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:08:51.391 17:09:37 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:08:51.391 17:09:37 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:08:51.391 17:09:37 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:08:51.391 17:09:37 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:08:51.391 17:09:37 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:08:52.352 Creating new GPT entries in memory. 00:08:52.352 GPT data structures destroyed! You may now partition the disk using fdisk or 00:08:52.352 other utilities. 00:08:52.352 17:09:38 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:08:52.352 17:09:38 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:08:52.352 17:09:38 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:08:52.352 17:09:38 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:08:52.352 17:09:38 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:08:53.730 Creating new GPT entries in memory. 00:08:53.730 The operation has completed successfully. 
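Editorial note for readers following the dm_mount flow: the xtrace above amounts to wiping the drive's partition tables and carving two equal partitions with sgdisk under flock (the second sgdisk call appears just below). A minimal sketch of that sequence, with the device name and sector ranges taken from this log; run it only against a disposable test disk:

  disk=/dev/nvme0n1
  sgdisk "$disk" --zap-all                             # destroy any existing GPT/MBR structures
  flock "$disk" sgdisk "$disk" --new=1:2048:264191     # nvme0n1p1, 262144 sectors (range from the log)
  flock "$disk" sgdisk "$disk" --new=2:264192:526335   # nvme0n1p2, same size (range from the log)
  # sync_dev_uevents.sh (invoked above) then waits for the matching partition
  # uevents, so /dev/nvme0n1p1 and /dev/nvme0n1p2 exist before they are used.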
00:08:53.730 17:09:39 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:08:53.730 17:09:39 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:08:53.730 17:09:39 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:08:53.730 17:09:39 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:08:53.730 17:09:39 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:08:54.667 The operation has completed successfully. 00:08:54.667 17:09:40 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:08:54.667 17:09:40 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:08:54.667 17:09:40 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 59372 00:08:54.667 17:09:40 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:08:54.667 17:09:40 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:08:54.667 17:09:40 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:08:54.667 17:09:40 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:08:54.667 17:09:40 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:08:54.667 17:09:40 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:08:54.667 17:09:40 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:08:54.667 17:09:40 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:08:54.667 17:09:40 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:08:54.667 17:09:40 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:08:54.667 17:09:40 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:08:54.667 17:09:40 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:08:54.667 17:09:40 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:08:54.667 17:09:40 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:08:54.667 17:09:40 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:08:54.667 17:09:40 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:08:54.667 17:09:40 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:08:54.667 17:09:40 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:08:54.667 17:09:40 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:08:54.667 17:09:40 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:08:54.667 17:09:40 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:08:54.667 17:09:40 setup.sh.devices.dm_mount -- setup/devices.sh@49 
-- # local mounts=nvme0n1:nvme_dm_test 00:08:54.667 17:09:40 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:08:54.667 17:09:40 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:08:54.667 17:09:40 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:08:54.667 17:09:40 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:08:54.667 17:09:40 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:08:54.667 17:09:40 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:08:54.667 17:09:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:54.667 17:09:40 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:08:54.667 17:09:40 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:08:54.667 17:09:40 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:08:54.667 17:09:40 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:08:54.925 17:09:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:08:54.925 17:09:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:08:54.925 17:09:40 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:08:54.925 17:09:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:54.925 17:09:40 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:08:54.925 17:09:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:54.925 17:09:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:08:54.925 17:09:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:54.925 17:09:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:08:54.925 17:09:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:55.185 17:09:41 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:08:55.185 17:09:41 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:08:55.185 17:09:41 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:08:55.185 17:09:41 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:08:55.185 17:09:41 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:08:55.185 17:09:41 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:08:55.185 17:09:41 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:08:55.185 17:09:41 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:08:55.185 17:09:41 
setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:08:55.185 17:09:41 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:08:55.185 17:09:41 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:08:55.185 17:09:41 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:08:55.185 17:09:41 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:08:55.185 17:09:41 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:08:55.185 17:09:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:55.185 17:09:41 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:08:55.185 17:09:41 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:08:55.185 17:09:41 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:08:55.185 17:09:41 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:08:55.185 17:09:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:08:55.185 17:09:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:08:55.185 17:09:41 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:08:55.185 17:09:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:55.185 17:09:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:08:55.185 17:09:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:55.444 17:09:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:08:55.444 17:09:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:55.444 17:09:41 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:08:55.444 17:09:41 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:08:55.444 17:09:41 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:08:55.444 17:09:41 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:08:55.444 17:09:41 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:08:55.444 17:09:41 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:08:55.444 17:09:41 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:08:55.444 17:09:41 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:08:55.444 17:09:41 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:08:55.704 17:09:41 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:08:55.704 17:09:41 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:08:55.704 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:08:55.704 17:09:41 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:08:55.704 17:09:41 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all 
/dev/nvme0n1p2 00:08:55.704 00:08:55.704 real 0m4.163s 00:08:55.704 user 0m0.427s 00:08:55.704 sys 0m0.694s 00:08:55.704 17:09:41 setup.sh.devices.dm_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:55.704 17:09:41 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:08:55.704 ************************************ 00:08:55.704 END TEST dm_mount 00:08:55.704 ************************************ 00:08:55.704 17:09:41 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:08:55.704 17:09:41 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:08:55.704 17:09:41 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:08:55.704 17:09:41 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:08:55.704 17:09:41 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:08:55.704 17:09:41 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:08:55.704 17:09:41 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:08:55.964 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:08:55.964 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:08:55.964 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:08:55.964 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:08:55.964 17:09:42 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:08:55.964 17:09:42 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:08:55.964 17:09:42 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:08:55.964 17:09:42 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:08:55.964 17:09:42 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:08:55.964 17:09:42 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:08:55.964 17:09:42 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:08:55.964 00:08:55.964 real 0m9.694s 00:08:55.964 user 0m1.759s 00:08:55.964 sys 0m2.344s 00:08:55.964 17:09:42 setup.sh.devices -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:55.964 ************************************ 00:08:55.964 17:09:42 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:08:55.964 END TEST devices 00:08:55.964 ************************************ 00:08:55.964 00:08:55.964 real 0m21.217s 00:08:55.964 user 0m6.907s 00:08:55.964 sys 0m8.737s 00:08:55.964 17:09:42 setup.sh -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:55.964 17:09:42 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:08:55.964 ************************************ 00:08:55.964 END TEST setup.sh 00:08:55.964 ************************************ 00:08:55.964 17:09:42 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:08:56.531 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:56.531 Hugepages 00:08:56.531 node hugesize free / total 00:08:56.531 node0 1048576kB 0 / 0 00:08:56.531 node0 2048kB 2048 / 2048 00:08:56.531 00:08:56.531 Type BDF Vendor Device NUMA Driver Device Block devices 00:08:56.791 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:08:56.791 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:08:56.791 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 
nvme0n2 nvme0n3 00:08:56.791 17:09:42 -- spdk/autotest.sh@130 -- # uname -s 00:08:56.791 17:09:42 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:08:56.791 17:09:42 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:08:56.791 17:09:42 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:57.360 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:57.618 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:57.618 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:57.618 17:09:43 -- common/autotest_common.sh@1532 -- # sleep 1 00:08:58.687 17:09:44 -- common/autotest_common.sh@1533 -- # bdfs=() 00:08:58.687 17:09:44 -- common/autotest_common.sh@1533 -- # local bdfs 00:08:58.687 17:09:44 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:08:58.687 17:09:44 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:08:58.687 17:09:44 -- common/autotest_common.sh@1513 -- # bdfs=() 00:08:58.687 17:09:44 -- common/autotest_common.sh@1513 -- # local bdfs 00:08:58.687 17:09:44 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:58.687 17:09:44 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:58.687 17:09:44 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:08:58.688 17:09:44 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:08:58.688 17:09:44 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:08:58.688 17:09:44 -- common/autotest_common.sh@1536 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:58.952 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:58.952 Waiting for block devices as requested 00:08:58.952 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:59.214 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:08:59.214 17:09:45 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:08:59.214 17:09:45 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:08:59.214 17:09:45 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:08:59.214 17:09:45 -- common/autotest_common.sh@1502 -- # grep 0000:00:10.0/nvme/nvme 00:08:59.214 17:09:45 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:08:59.214 17:09:45 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:08:59.214 17:09:45 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:08:59.214 17:09:45 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme1 00:08:59.214 17:09:45 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme1 00:08:59.214 17:09:45 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme1 ]] 00:08:59.214 17:09:45 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme1 00:08:59.214 17:09:45 -- common/autotest_common.sh@1545 -- # grep oacs 00:08:59.214 17:09:45 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:08:59.214 17:09:45 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:08:59.214 17:09:45 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:08:59.214 17:09:45 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:08:59.214 17:09:45 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme1 
00:08:59.214 17:09:45 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:08:59.214 17:09:45 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:08:59.214 17:09:45 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:08:59.214 17:09:45 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:08:59.214 17:09:45 -- common/autotest_common.sh@1557 -- # continue 00:08:59.214 17:09:45 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:08:59.214 17:09:45 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:08:59.214 17:09:45 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:08:59.214 17:09:45 -- common/autotest_common.sh@1502 -- # grep 0000:00:11.0/nvme/nvme 00:08:59.214 17:09:45 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:08:59.214 17:09:45 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:08:59.214 17:09:45 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:08:59.214 17:09:45 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:08:59.214 17:09:45 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:08:59.214 17:09:45 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:08:59.214 17:09:45 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:08:59.214 17:09:45 -- common/autotest_common.sh@1545 -- # grep oacs 00:08:59.214 17:09:45 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:08:59.214 17:09:45 -- common/autotest_common.sh@1545 -- # oacs=' 0x12a' 00:08:59.214 17:09:45 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:08:59.214 17:09:45 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:08:59.214 17:09:45 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:08:59.214 17:09:45 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:08:59.214 17:09:45 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:08:59.214 17:09:45 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:08:59.214 17:09:45 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:08:59.214 17:09:45 -- common/autotest_common.sh@1557 -- # continue 00:08:59.214 17:09:45 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:08:59.214 17:09:45 -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:59.214 17:09:45 -- common/autotest_common.sh@10 -- # set +x 00:08:59.474 17:09:45 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:08:59.474 17:09:45 -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:59.474 17:09:45 -- common/autotest_common.sh@10 -- # set +x 00:08:59.474 17:09:45 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:00.046 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:00.046 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:00.046 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:00.046 17:09:46 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:09:00.046 17:09:46 -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:00.046 17:09:46 -- common/autotest_common.sh@10 -- # set +x 00:09:00.305 17:09:46 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:09:00.305 17:09:46 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:09:00.305 17:09:46 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:09:00.305 17:09:46 -- common/autotest_common.sh@1577 -- 
# bdfs=() 00:09:00.305 17:09:46 -- common/autotest_common.sh@1577 -- # local bdfs 00:09:00.305 17:09:46 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:09:00.305 17:09:46 -- common/autotest_common.sh@1513 -- # bdfs=() 00:09:00.305 17:09:46 -- common/autotest_common.sh@1513 -- # local bdfs 00:09:00.305 17:09:46 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:00.305 17:09:46 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:00.305 17:09:46 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:09:00.305 17:09:46 -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:09:00.305 17:09:46 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:09:00.305 17:09:46 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:09:00.305 17:09:46 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:09:00.305 17:09:46 -- common/autotest_common.sh@1580 -- # device=0x0010 00:09:00.305 17:09:46 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:09:00.305 17:09:46 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:09:00.305 17:09:46 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:09:00.305 17:09:46 -- common/autotest_common.sh@1580 -- # device=0x0010 00:09:00.305 17:09:46 -- common/autotest_common.sh@1581 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:09:00.305 17:09:46 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:09:00.305 17:09:46 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:09:00.305 17:09:46 -- common/autotest_common.sh@1593 -- # return 0 00:09:00.305 17:09:46 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:09:00.305 17:09:46 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:09:00.305 17:09:46 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:09:00.305 17:09:46 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:09:00.305 17:09:46 -- spdk/autotest.sh@162 -- # timing_enter lib 00:09:00.305 17:09:46 -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:00.305 17:09:46 -- common/autotest_common.sh@10 -- # set +x 00:09:00.306 17:09:46 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:09:00.306 17:09:46 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:09:00.306 17:09:46 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:00.306 17:09:46 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:00.306 17:09:46 -- common/autotest_common.sh@10 -- # set +x 00:09:00.306 ************************************ 00:09:00.306 START TEST env 00:09:00.306 ************************************ 00:09:00.306 17:09:46 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:09:00.306 * Looking for test storage... 
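Editorial note: the opal_revert_cleanup and pre-cleanup checks logged above come down to reading each controller's PCI device ID from sysfs and querying two identify-controller fields with nvme-cli. A hedged sketch of those checks (the BDFs, the 0x0a54 comparison value, and the /dev/nvme1 node are taken from the log; the inline interpretations are assumptions):

  for bdf in 0000:00:10.0 0000:00:11.0; do
    dev_id=$(cat "/sys/bus/pci/devices/$bdf/device")
    [[ $dev_id == 0x0a54 ]] && echo "$bdf selected for OPAL revert"   # both emulated devices report 0x0010 here, so none is selected
  done
  nvme id-ctrl /dev/nvme1 | grep oacs    | cut -d: -f2   # 0x12a above: bit 3 set, namespace management supported
  nvme id-ctrl /dev/nvme1 | grep unvmcap | cut -d: -f2   # 0 above, so this controller is skipped (the 'continue' in the log)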
00:09:00.306 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:09:00.306 17:09:46 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:09:00.306 17:09:46 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:00.306 17:09:46 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:00.306 17:09:46 env -- common/autotest_common.sh@10 -- # set +x 00:09:00.306 ************************************ 00:09:00.306 START TEST env_memory 00:09:00.306 ************************************ 00:09:00.306 17:09:46 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:09:00.306 00:09:00.306 00:09:00.306 CUnit - A unit testing framework for C - Version 2.1-3 00:09:00.306 http://cunit.sourceforge.net/ 00:09:00.306 00:09:00.306 00:09:00.306 Suite: memory 00:09:00.306 Test: alloc and free memory map ...[2024-07-24 17:09:46.510472] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:09:00.306 passed 00:09:00.566 Test: mem map translation ...[2024-07-24 17:09:46.539748] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:09:00.566 [2024-07-24 17:09:46.539835] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:09:00.566 [2024-07-24 17:09:46.539899] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:09:00.566 [2024-07-24 17:09:46.539913] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:09:00.566 passed 00:09:00.566 Test: mem map registration ...[2024-07-24 17:09:46.598524] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:09:00.566 [2024-07-24 17:09:46.598620] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:09:00.566 passed 00:09:00.566 Test: mem map adjacent registrations ...passed 00:09:00.566 00:09:00.566 Run Summary: Type Total Ran Passed Failed Inactive 00:09:00.566 suites 1 1 n/a 0 0 00:09:00.566 tests 4 4 4 0 0 00:09:00.566 asserts 152 152 152 0 n/a 00:09:00.566 00:09:00.566 Elapsed time = 0.186 seconds 00:09:00.566 00:09:00.566 real 0m0.203s 00:09:00.566 user 0m0.187s 00:09:00.566 sys 0m0.013s 00:09:00.567 17:09:46 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:00.567 ************************************ 00:09:00.567 END TEST env_memory 00:09:00.567 ************************************ 00:09:00.567 17:09:46 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:09:00.567 17:09:46 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:09:00.567 17:09:46 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:00.567 17:09:46 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:00.567 17:09:46 env -- common/autotest_common.sh@10 -- # set +x 00:09:00.567 ************************************ 00:09:00.567 START TEST env_vtophys 00:09:00.567 ************************************ 00:09:00.567 17:09:46 
env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:09:00.567 EAL: lib.eal log level changed from notice to debug 00:09:00.567 EAL: Detected lcore 0 as core 0 on socket 0 00:09:00.567 EAL: Detected lcore 1 as core 0 on socket 0 00:09:00.567 EAL: Detected lcore 2 as core 0 on socket 0 00:09:00.567 EAL: Detected lcore 3 as core 0 on socket 0 00:09:00.567 EAL: Detected lcore 4 as core 0 on socket 0 00:09:00.567 EAL: Detected lcore 5 as core 0 on socket 0 00:09:00.567 EAL: Detected lcore 6 as core 0 on socket 0 00:09:00.567 EAL: Detected lcore 7 as core 0 on socket 0 00:09:00.567 EAL: Detected lcore 8 as core 0 on socket 0 00:09:00.567 EAL: Detected lcore 9 as core 0 on socket 0 00:09:00.567 EAL: Maximum logical cores by configuration: 128 00:09:00.567 EAL: Detected CPU lcores: 10 00:09:00.567 EAL: Detected NUMA nodes: 1 00:09:00.567 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:09:00.567 EAL: Detected shared linkage of DPDK 00:09:00.567 EAL: No shared files mode enabled, IPC will be disabled 00:09:00.567 EAL: Selected IOVA mode 'PA' 00:09:00.567 EAL: Probing VFIO support... 00:09:00.567 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:09:00.567 EAL: VFIO modules not loaded, skipping VFIO support... 00:09:00.567 EAL: Ask a virtual area of 0x2e000 bytes 00:09:00.567 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:09:00.567 EAL: Setting up physically contiguous memory... 00:09:00.567 EAL: Setting maximum number of open files to 524288 00:09:00.567 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:09:00.567 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:09:00.567 EAL: Ask a virtual area of 0x61000 bytes 00:09:00.567 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:09:00.567 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:00.567 EAL: Ask a virtual area of 0x400000000 bytes 00:09:00.567 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:09:00.567 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:09:00.567 EAL: Ask a virtual area of 0x61000 bytes 00:09:00.567 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:09:00.567 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:00.567 EAL: Ask a virtual area of 0x400000000 bytes 00:09:00.567 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:09:00.567 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:09:00.567 EAL: Ask a virtual area of 0x61000 bytes 00:09:00.567 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:09:00.567 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:00.567 EAL: Ask a virtual area of 0x400000000 bytes 00:09:00.567 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:09:00.567 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:09:00.567 EAL: Ask a virtual area of 0x61000 bytes 00:09:00.567 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:09:00.567 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:09:00.567 EAL: Ask a virtual area of 0x400000000 bytes 00:09:00.567 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:09:00.567 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:09:00.567 EAL: Hugepages will be freed exactly as allocated. 
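Editorial note: the EAL bring-up above relies on the 2 MB hugepages reported earlier by setup.sh status (node0 2048kB 2048 / 2048) and falls back from VFIO because the module is not loaded. A hedged sketch of how one might confirm both preconditions before running these env tests:

  grep Huge /proc/meminfo                                        # HugePages_Total / HugePages_Free should be non-zero
  cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages     # 2048 in this run
  lsmod | grep -E 'vfio|uio_pci_generic'                         # explains the "VFIO modules not loaded" fallback above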
00:09:00.567 EAL: No shared files mode enabled, IPC is disabled 00:09:00.567 EAL: No shared files mode enabled, IPC is disabled 00:09:00.827 EAL: TSC frequency is ~2200000 KHz 00:09:00.827 EAL: Main lcore 0 is ready (tid=7f7113eeda00;cpuset=[0]) 00:09:00.827 EAL: Trying to obtain current memory policy. 00:09:00.827 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:00.827 EAL: Restoring previous memory policy: 0 00:09:00.827 EAL: request: mp_malloc_sync 00:09:00.827 EAL: No shared files mode enabled, IPC is disabled 00:09:00.827 EAL: Heap on socket 0 was expanded by 2MB 00:09:00.827 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:09:00.827 EAL: No PCI address specified using 'addr=' in: bus=pci 00:09:00.827 EAL: Mem event callback 'spdk:(nil)' registered 00:09:00.827 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:09:00.827 00:09:00.827 00:09:00.827 CUnit - A unit testing framework for C - Version 2.1-3 00:09:00.827 http://cunit.sourceforge.net/ 00:09:00.827 00:09:00.827 00:09:00.827 Suite: components_suite 00:09:00.827 Test: vtophys_malloc_test ...passed 00:09:00.827 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:09:00.827 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:00.827 EAL: Restoring previous memory policy: 4 00:09:00.827 EAL: Calling mem event callback 'spdk:(nil)' 00:09:00.827 EAL: request: mp_malloc_sync 00:09:00.827 EAL: No shared files mode enabled, IPC is disabled 00:09:00.827 EAL: Heap on socket 0 was expanded by 4MB 00:09:00.827 EAL: Calling mem event callback 'spdk:(nil)' 00:09:00.827 EAL: request: mp_malloc_sync 00:09:00.827 EAL: No shared files mode enabled, IPC is disabled 00:09:00.827 EAL: Heap on socket 0 was shrunk by 4MB 00:09:00.827 EAL: Trying to obtain current memory policy. 00:09:00.827 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:00.827 EAL: Restoring previous memory policy: 4 00:09:00.827 EAL: Calling mem event callback 'spdk:(nil)' 00:09:00.827 EAL: request: mp_malloc_sync 00:09:00.827 EAL: No shared files mode enabled, IPC is disabled 00:09:00.827 EAL: Heap on socket 0 was expanded by 6MB 00:09:00.827 EAL: Calling mem event callback 'spdk:(nil)' 00:09:00.827 EAL: request: mp_malloc_sync 00:09:00.827 EAL: No shared files mode enabled, IPC is disabled 00:09:00.827 EAL: Heap on socket 0 was shrunk by 6MB 00:09:00.827 EAL: Trying to obtain current memory policy. 00:09:00.827 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:00.827 EAL: Restoring previous memory policy: 4 00:09:00.827 EAL: Calling mem event callback 'spdk:(nil)' 00:09:00.827 EAL: request: mp_malloc_sync 00:09:00.827 EAL: No shared files mode enabled, IPC is disabled 00:09:00.827 EAL: Heap on socket 0 was expanded by 10MB 00:09:00.827 EAL: Calling mem event callback 'spdk:(nil)' 00:09:00.827 EAL: request: mp_malloc_sync 00:09:00.827 EAL: No shared files mode enabled, IPC is disabled 00:09:00.827 EAL: Heap on socket 0 was shrunk by 10MB 00:09:00.827 EAL: Trying to obtain current memory policy. 
00:09:00.827 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:00.827 EAL: Restoring previous memory policy: 4 00:09:00.827 EAL: Calling mem event callback 'spdk:(nil)' 00:09:00.827 EAL: request: mp_malloc_sync 00:09:00.827 EAL: No shared files mode enabled, IPC is disabled 00:09:00.827 EAL: Heap on socket 0 was expanded by 18MB 00:09:00.827 EAL: Calling mem event callback 'spdk:(nil)' 00:09:00.827 EAL: request: mp_malloc_sync 00:09:00.827 EAL: No shared files mode enabled, IPC is disabled 00:09:00.827 EAL: Heap on socket 0 was shrunk by 18MB 00:09:00.827 EAL: Trying to obtain current memory policy. 00:09:00.827 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:00.827 EAL: Restoring previous memory policy: 4 00:09:00.827 EAL: Calling mem event callback 'spdk:(nil)' 00:09:00.827 EAL: request: mp_malloc_sync 00:09:00.827 EAL: No shared files mode enabled, IPC is disabled 00:09:00.827 EAL: Heap on socket 0 was expanded by 34MB 00:09:00.827 EAL: Calling mem event callback 'spdk:(nil)' 00:09:00.827 EAL: request: mp_malloc_sync 00:09:00.827 EAL: No shared files mode enabled, IPC is disabled 00:09:00.827 EAL: Heap on socket 0 was shrunk by 34MB 00:09:00.827 EAL: Trying to obtain current memory policy. 00:09:00.827 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:00.827 EAL: Restoring previous memory policy: 4 00:09:00.827 EAL: Calling mem event callback 'spdk:(nil)' 00:09:00.827 EAL: request: mp_malloc_sync 00:09:00.827 EAL: No shared files mode enabled, IPC is disabled 00:09:00.827 EAL: Heap on socket 0 was expanded by 66MB 00:09:00.827 EAL: Calling mem event callback 'spdk:(nil)' 00:09:00.827 EAL: request: mp_malloc_sync 00:09:00.827 EAL: No shared files mode enabled, IPC is disabled 00:09:00.827 EAL: Heap on socket 0 was shrunk by 66MB 00:09:00.827 EAL: Trying to obtain current memory policy. 00:09:00.827 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:00.827 EAL: Restoring previous memory policy: 4 00:09:00.827 EAL: Calling mem event callback 'spdk:(nil)' 00:09:00.827 EAL: request: mp_malloc_sync 00:09:00.827 EAL: No shared files mode enabled, IPC is disabled 00:09:00.827 EAL: Heap on socket 0 was expanded by 130MB 00:09:00.827 EAL: Calling mem event callback 'spdk:(nil)' 00:09:00.827 EAL: request: mp_malloc_sync 00:09:00.827 EAL: No shared files mode enabled, IPC is disabled 00:09:00.827 EAL: Heap on socket 0 was shrunk by 130MB 00:09:00.827 EAL: Trying to obtain current memory policy. 00:09:00.827 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:01.086 EAL: Restoring previous memory policy: 4 00:09:01.086 EAL: Calling mem event callback 'spdk:(nil)' 00:09:01.086 EAL: request: mp_malloc_sync 00:09:01.087 EAL: No shared files mode enabled, IPC is disabled 00:09:01.087 EAL: Heap on socket 0 was expanded by 258MB 00:09:01.087 EAL: Calling mem event callback 'spdk:(nil)' 00:09:01.087 EAL: request: mp_malloc_sync 00:09:01.087 EAL: No shared files mode enabled, IPC is disabled 00:09:01.087 EAL: Heap on socket 0 was shrunk by 258MB 00:09:01.087 EAL: Trying to obtain current memory policy. 
00:09:01.087 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:01.346 EAL: Restoring previous memory policy: 4 00:09:01.347 EAL: Calling mem event callback 'spdk:(nil)' 00:09:01.347 EAL: request: mp_malloc_sync 00:09:01.347 EAL: No shared files mode enabled, IPC is disabled 00:09:01.347 EAL: Heap on socket 0 was expanded by 514MB 00:09:01.347 EAL: Calling mem event callback 'spdk:(nil)' 00:09:01.347 EAL: request: mp_malloc_sync 00:09:01.347 EAL: No shared files mode enabled, IPC is disabled 00:09:01.347 EAL: Heap on socket 0 was shrunk by 514MB 00:09:01.347 EAL: Trying to obtain current memory policy. 00:09:01.347 EAL: Setting policy MPOL_PREFERRED for socket 0 00:09:01.606 EAL: Restoring previous memory policy: 4 00:09:01.606 EAL: Calling mem event callback 'spdk:(nil)' 00:09:01.606 EAL: request: mp_malloc_sync 00:09:01.606 EAL: No shared files mode enabled, IPC is disabled 00:09:01.606 EAL: Heap on socket 0 was expanded by 1026MB 00:09:01.866 EAL: Calling mem event callback 'spdk:(nil)' 00:09:02.126 EAL: request: mp_malloc_sync 00:09:02.126 EAL: No shared files mode enabled, IPC is disabled 00:09:02.126 EAL: Heap on socket 0 was shrunk by 1026MB 00:09:02.126 passed 00:09:02.126 00:09:02.126 Run Summary: Type Total Ran Passed Failed Inactive 00:09:02.126 suites 1 1 n/a 0 0 00:09:02.126 tests 2 2 2 0 0 00:09:02.126 asserts 5239 5239 5239 0 n/a 00:09:02.126 00:09:02.126 Elapsed time = 1.281 seconds 00:09:02.126 EAL: Calling mem event callback 'spdk:(nil)' 00:09:02.126 EAL: request: mp_malloc_sync 00:09:02.126 EAL: No shared files mode enabled, IPC is disabled 00:09:02.126 EAL: Heap on socket 0 was shrunk by 2MB 00:09:02.126 EAL: No shared files mode enabled, IPC is disabled 00:09:02.126 EAL: No shared files mode enabled, IPC is disabled 00:09:02.126 EAL: No shared files mode enabled, IPC is disabled 00:09:02.126 00:09:02.126 real 0m1.474s 00:09:02.126 user 0m0.794s 00:09:02.126 sys 0m0.543s 00:09:02.126 17:09:48 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:02.126 17:09:48 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:09:02.126 ************************************ 00:09:02.126 END TEST env_vtophys 00:09:02.126 ************************************ 00:09:02.126 17:09:48 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:09:02.126 17:09:48 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:02.126 17:09:48 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:02.126 17:09:48 env -- common/autotest_common.sh@10 -- # set +x 00:09:02.126 ************************************ 00:09:02.126 START TEST env_pci 00:09:02.126 ************************************ 00:09:02.126 17:09:48 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:09:02.126 00:09:02.126 00:09:02.126 CUnit - A unit testing framework for C - Version 2.1-3 00:09:02.126 http://cunit.sourceforge.net/ 00:09:02.126 00:09:02.126 00:09:02.126 Suite: pci 00:09:02.126 Test: pci_hook ...[2024-07-24 17:09:48.255752] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 60566 has claimed it 00:09:02.126 passed 00:09:02.126 00:09:02.126 EAL: Cannot find device (10000:00:01.0) 00:09:02.126 EAL: Failed to attach device on primary process 00:09:02.126 Run Summary: Type Total Ran Passed Failed Inactive 00:09:02.126 suites 1 1 n/a 0 0 00:09:02.126 tests 1 1 1 0 0 
00:09:02.126 asserts 25 25 25 0 n/a 00:09:02.126 00:09:02.126 Elapsed time = 0.003 seconds 00:09:02.126 00:09:02.126 real 0m0.020s 00:09:02.126 user 0m0.008s 00:09:02.126 sys 0m0.012s 00:09:02.126 17:09:48 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:02.126 17:09:48 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:09:02.126 ************************************ 00:09:02.126 END TEST env_pci 00:09:02.126 ************************************ 00:09:02.127 17:09:48 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:09:02.127 17:09:48 env -- env/env.sh@15 -- # uname 00:09:02.127 17:09:48 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:09:02.127 17:09:48 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:09:02.127 17:09:48 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:09:02.127 17:09:48 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:02.127 17:09:48 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:02.127 17:09:48 env -- common/autotest_common.sh@10 -- # set +x 00:09:02.127 ************************************ 00:09:02.127 START TEST env_dpdk_post_init 00:09:02.127 ************************************ 00:09:02.127 17:09:48 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:09:02.127 EAL: Detected CPU lcores: 10 00:09:02.127 EAL: Detected NUMA nodes: 1 00:09:02.127 EAL: Detected shared linkage of DPDK 00:09:02.127 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:09:02.127 EAL: Selected IOVA mode 'PA' 00:09:02.385 TELEMETRY: No legacy callbacks, legacy socket not created 00:09:02.385 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:09:02.385 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:09:02.385 Starting DPDK initialization... 00:09:02.385 Starting SPDK post initialization... 00:09:02.385 SPDK NVMe probe 00:09:02.385 Attaching to 0000:00:10.0 00:09:02.385 Attaching to 0000:00:11.0 00:09:02.385 Attached to 0000:00:10.0 00:09:02.385 Attached to 0000:00:11.0 00:09:02.385 Cleaning up... 
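Editorial note: the Attaching/Attached lines above only succeed because both controllers were rebound from the kernel nvme driver to uio_pci_generic earlier in this log. If the attach step ever fails, a hedged sketch for checking the current binding (BDFs taken from the log):

  for bdf in 0000:00:10.0 0000:00:11.0; do
    printf '%s -> %s\n' "$bdf" "$(basename "$(readlink -f /sys/bus/pci/devices/$bdf/driver)")"
  done
  # scripts/setup.sh status prints the same binding (plus hugepage state) in table form, as shown earlier in this log.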
00:09:02.385 00:09:02.385 real 0m0.183s 00:09:02.385 user 0m0.051s 00:09:02.385 sys 0m0.033s 00:09:02.385 17:09:48 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:02.385 17:09:48 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:09:02.385 ************************************ 00:09:02.385 END TEST env_dpdk_post_init 00:09:02.385 ************************************ 00:09:02.385 17:09:48 env -- env/env.sh@26 -- # uname 00:09:02.385 17:09:48 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:09:02.385 17:09:48 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:09:02.385 17:09:48 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:02.385 17:09:48 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:02.386 17:09:48 env -- common/autotest_common.sh@10 -- # set +x 00:09:02.386 ************************************ 00:09:02.386 START TEST env_mem_callbacks 00:09:02.386 ************************************ 00:09:02.386 17:09:48 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:09:02.386 EAL: Detected CPU lcores: 10 00:09:02.386 EAL: Detected NUMA nodes: 1 00:09:02.386 EAL: Detected shared linkage of DPDK 00:09:02.386 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:09:02.386 EAL: Selected IOVA mode 'PA' 00:09:02.644 00:09:02.644 00:09:02.644 CUnit - A unit testing framework for C - Version 2.1-3 00:09:02.644 http://cunit.sourceforge.net/ 00:09:02.644 00:09:02.644 00:09:02.644 Suite: memory 00:09:02.645 Test: test ... 00:09:02.645 register 0x200000200000 2097152 00:09:02.645 malloc 3145728 00:09:02.645 TELEMETRY: No legacy callbacks, legacy socket not created 00:09:02.645 register 0x200000400000 4194304 00:09:02.645 buf 0x200000500000 len 3145728 PASSED 00:09:02.645 malloc 64 00:09:02.645 buf 0x2000004fff40 len 64 PASSED 00:09:02.645 malloc 4194304 00:09:02.645 register 0x200000800000 6291456 00:09:02.645 buf 0x200000a00000 len 4194304 PASSED 00:09:02.645 free 0x200000500000 3145728 00:09:02.645 free 0x2000004fff40 64 00:09:02.645 unregister 0x200000400000 4194304 PASSED 00:09:02.645 free 0x200000a00000 4194304 00:09:02.645 unregister 0x200000800000 6291456 PASSED 00:09:02.645 malloc 8388608 00:09:02.645 register 0x200000400000 10485760 00:09:02.645 buf 0x200000600000 len 8388608 PASSED 00:09:02.645 free 0x200000600000 8388608 00:09:02.645 unregister 0x200000400000 10485760 PASSED 00:09:02.645 passed 00:09:02.645 00:09:02.645 Run Summary: Type Total Ran Passed Failed Inactive 00:09:02.645 suites 1 1 n/a 0 0 00:09:02.645 tests 1 1 1 0 0 00:09:02.645 asserts 15 15 15 0 n/a 00:09:02.645 00:09:02.645 Elapsed time = 0.007 seconds 00:09:02.645 00:09:02.645 real 0m0.140s 00:09:02.645 user 0m0.018s 00:09:02.645 sys 0m0.021s 00:09:02.645 17:09:48 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:02.645 17:09:48 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:09:02.645 ************************************ 00:09:02.645 END TEST env_mem_callbacks 00:09:02.645 ************************************ 00:09:02.645 00:09:02.645 real 0m2.352s 00:09:02.645 user 0m1.162s 00:09:02.645 sys 0m0.841s 00:09:02.645 17:09:48 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:02.645 ************************************ 00:09:02.645 END TEST env 00:09:02.645 ************************************ 00:09:02.645 17:09:48 env -- 
common/autotest_common.sh@10 -- # set +x 00:09:02.645 17:09:48 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:09:02.645 17:09:48 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:02.645 17:09:48 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:02.645 17:09:48 -- common/autotest_common.sh@10 -- # set +x 00:09:02.645 ************************************ 00:09:02.645 START TEST rpc 00:09:02.645 ************************************ 00:09:02.645 17:09:48 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:09:02.645 * Looking for test storage... 00:09:02.645 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:09:02.645 17:09:48 rpc -- rpc/rpc.sh@65 -- # spdk_pid=60670 00:09:02.645 17:09:48 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:09:02.645 17:09:48 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:02.645 17:09:48 rpc -- rpc/rpc.sh@67 -- # waitforlisten 60670 00:09:02.645 17:09:48 rpc -- common/autotest_common.sh@831 -- # '[' -z 60670 ']' 00:09:02.645 17:09:48 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:02.645 17:09:48 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:02.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:02.645 17:09:48 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:02.645 17:09:48 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:02.645 17:09:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:02.904 [2024-07-24 17:09:48.940912] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:09:02.904 [2024-07-24 17:09:48.941009] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60670 ] 00:09:02.904 [2024-07-24 17:09:49.077339] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.162 [2024-07-24 17:09:49.202391] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:09:03.162 [2024-07-24 17:09:49.202447] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 60670' to capture a snapshot of events at runtime. 00:09:03.162 [2024-07-24 17:09:49.202459] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:03.162 [2024-07-24 17:09:49.202468] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:03.162 [2024-07-24 17:09:49.202475] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid60670 for offline analysis/debug. 
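The app_setup_trace notices above give the exact commands for pulling trace data out of this spdk_tgt instance (pid 60670, started with the bdev tracepoint group via -e bdev). A minimal sketch of both options, assuming spdk_trace was built alongside the target under build/bin:

    # snapshot the live trace ring buffer of the running target
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_trace -s spdk_tgt -p 60670
    # or keep the shared-memory trace file for offline analysis
    cp /dev/shm/spdk_tgt_trace.pid60670 /tmp/spdk_tgt_trace.pid60670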
00:09:03.162 [2024-07-24 17:09:49.202527] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.800 17:09:49 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:03.800 17:09:49 rpc -- common/autotest_common.sh@864 -- # return 0 00:09:03.800 17:09:49 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:09:03.800 17:09:49 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:09:03.800 17:09:49 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:09:03.800 17:09:49 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:09:03.800 17:09:49 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:03.800 17:09:49 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:03.800 17:09:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:03.800 ************************************ 00:09:03.800 START TEST rpc_integrity 00:09:03.800 ************************************ 00:09:03.800 17:09:49 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:09:03.800 17:09:49 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:09:03.800 17:09:49 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.800 17:09:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:03.800 17:09:49 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.800 17:09:49 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:09:03.800 17:09:49 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:09:03.800 17:09:50 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:09:03.800 17:09:50 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:09:03.800 17:09:50 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.800 17:09:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:03.800 17:09:50 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.800 17:09:50 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:09:03.800 17:09:50 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:09:03.800 17:09:50 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.800 17:09:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:04.059 17:09:50 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.059 17:09:50 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:09:04.059 { 00:09:04.059 "aliases": [ 00:09:04.059 "1957c704-9e91-4cf3-8630-acabd9067a63" 00:09:04.059 ], 00:09:04.059 "assigned_rate_limits": { 00:09:04.059 "r_mbytes_per_sec": 0, 00:09:04.059 "rw_ios_per_sec": 0, 00:09:04.059 "rw_mbytes_per_sec": 0, 00:09:04.059 "w_mbytes_per_sec": 0 00:09:04.059 }, 00:09:04.059 "block_size": 512, 00:09:04.059 "claimed": false, 00:09:04.059 "driver_specific": {}, 00:09:04.059 "memory_domains": [ 00:09:04.059 { 00:09:04.059 "dma_device_id": "system", 00:09:04.059 "dma_device_type": 1 00:09:04.059 }, 00:09:04.059 { 00:09:04.059 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:04.059 "dma_device_type": 2 00:09:04.059 } 00:09:04.059 ], 00:09:04.059 "name": "Malloc0", 
00:09:04.059 "num_blocks": 16384, 00:09:04.059 "product_name": "Malloc disk", 00:09:04.059 "supported_io_types": { 00:09:04.059 "abort": true, 00:09:04.059 "compare": false, 00:09:04.059 "compare_and_write": false, 00:09:04.059 "copy": true, 00:09:04.059 "flush": true, 00:09:04.059 "get_zone_info": false, 00:09:04.059 "nvme_admin": false, 00:09:04.059 "nvme_io": false, 00:09:04.059 "nvme_io_md": false, 00:09:04.059 "nvme_iov_md": false, 00:09:04.059 "read": true, 00:09:04.059 "reset": true, 00:09:04.059 "seek_data": false, 00:09:04.059 "seek_hole": false, 00:09:04.059 "unmap": true, 00:09:04.059 "write": true, 00:09:04.059 "write_zeroes": true, 00:09:04.059 "zcopy": true, 00:09:04.059 "zone_append": false, 00:09:04.059 "zone_management": false 00:09:04.059 }, 00:09:04.059 "uuid": "1957c704-9e91-4cf3-8630-acabd9067a63", 00:09:04.059 "zoned": false 00:09:04.059 } 00:09:04.059 ]' 00:09:04.059 17:09:50 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:09:04.059 17:09:50 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:09:04.059 17:09:50 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:09:04.059 17:09:50 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.059 17:09:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:04.059 [2024-07-24 17:09:50.091893] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:09:04.060 [2024-07-24 17:09:50.091951] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:04.060 [2024-07-24 17:09:50.091970] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc36ad0 00:09:04.060 [2024-07-24 17:09:50.091980] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:04.060 [2024-07-24 17:09:50.093680] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:04.060 [2024-07-24 17:09:50.093714] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:09:04.060 Passthru0 00:09:04.060 17:09:50 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.060 17:09:50 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:09:04.060 17:09:50 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.060 17:09:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:04.060 17:09:50 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.060 17:09:50 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:09:04.060 { 00:09:04.060 "aliases": [ 00:09:04.060 "1957c704-9e91-4cf3-8630-acabd9067a63" 00:09:04.060 ], 00:09:04.060 "assigned_rate_limits": { 00:09:04.060 "r_mbytes_per_sec": 0, 00:09:04.060 "rw_ios_per_sec": 0, 00:09:04.060 "rw_mbytes_per_sec": 0, 00:09:04.060 "w_mbytes_per_sec": 0 00:09:04.060 }, 00:09:04.060 "block_size": 512, 00:09:04.060 "claim_type": "exclusive_write", 00:09:04.060 "claimed": true, 00:09:04.060 "driver_specific": {}, 00:09:04.060 "memory_domains": [ 00:09:04.060 { 00:09:04.060 "dma_device_id": "system", 00:09:04.060 "dma_device_type": 1 00:09:04.060 }, 00:09:04.060 { 00:09:04.060 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:04.060 "dma_device_type": 2 00:09:04.060 } 00:09:04.060 ], 00:09:04.060 "name": "Malloc0", 00:09:04.060 "num_blocks": 16384, 00:09:04.060 "product_name": "Malloc disk", 00:09:04.060 "supported_io_types": { 00:09:04.060 "abort": true, 00:09:04.060 "compare": false, 00:09:04.060 
"compare_and_write": false, 00:09:04.060 "copy": true, 00:09:04.060 "flush": true, 00:09:04.060 "get_zone_info": false, 00:09:04.060 "nvme_admin": false, 00:09:04.060 "nvme_io": false, 00:09:04.060 "nvme_io_md": false, 00:09:04.060 "nvme_iov_md": false, 00:09:04.060 "read": true, 00:09:04.060 "reset": true, 00:09:04.060 "seek_data": false, 00:09:04.060 "seek_hole": false, 00:09:04.060 "unmap": true, 00:09:04.060 "write": true, 00:09:04.060 "write_zeroes": true, 00:09:04.060 "zcopy": true, 00:09:04.060 "zone_append": false, 00:09:04.060 "zone_management": false 00:09:04.060 }, 00:09:04.060 "uuid": "1957c704-9e91-4cf3-8630-acabd9067a63", 00:09:04.060 "zoned": false 00:09:04.060 }, 00:09:04.060 { 00:09:04.060 "aliases": [ 00:09:04.060 "f600f137-fbec-5bdf-9c58-cba077b0f6df" 00:09:04.060 ], 00:09:04.060 "assigned_rate_limits": { 00:09:04.060 "r_mbytes_per_sec": 0, 00:09:04.060 "rw_ios_per_sec": 0, 00:09:04.060 "rw_mbytes_per_sec": 0, 00:09:04.060 "w_mbytes_per_sec": 0 00:09:04.060 }, 00:09:04.060 "block_size": 512, 00:09:04.060 "claimed": false, 00:09:04.060 "driver_specific": { 00:09:04.060 "passthru": { 00:09:04.060 "base_bdev_name": "Malloc0", 00:09:04.060 "name": "Passthru0" 00:09:04.060 } 00:09:04.060 }, 00:09:04.060 "memory_domains": [ 00:09:04.060 { 00:09:04.060 "dma_device_id": "system", 00:09:04.060 "dma_device_type": 1 00:09:04.060 }, 00:09:04.060 { 00:09:04.060 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:04.060 "dma_device_type": 2 00:09:04.060 } 00:09:04.060 ], 00:09:04.060 "name": "Passthru0", 00:09:04.060 "num_blocks": 16384, 00:09:04.060 "product_name": "passthru", 00:09:04.060 "supported_io_types": { 00:09:04.060 "abort": true, 00:09:04.060 "compare": false, 00:09:04.060 "compare_and_write": false, 00:09:04.060 "copy": true, 00:09:04.060 "flush": true, 00:09:04.060 "get_zone_info": false, 00:09:04.060 "nvme_admin": false, 00:09:04.060 "nvme_io": false, 00:09:04.060 "nvme_io_md": false, 00:09:04.060 "nvme_iov_md": false, 00:09:04.060 "read": true, 00:09:04.060 "reset": true, 00:09:04.060 "seek_data": false, 00:09:04.060 "seek_hole": false, 00:09:04.060 "unmap": true, 00:09:04.060 "write": true, 00:09:04.060 "write_zeroes": true, 00:09:04.060 "zcopy": true, 00:09:04.060 "zone_append": false, 00:09:04.060 "zone_management": false 00:09:04.060 }, 00:09:04.060 "uuid": "f600f137-fbec-5bdf-9c58-cba077b0f6df", 00:09:04.060 "zoned": false 00:09:04.060 } 00:09:04.060 ]' 00:09:04.060 17:09:50 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:09:04.060 17:09:50 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:09:04.060 17:09:50 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:09:04.060 17:09:50 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.060 17:09:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:04.060 17:09:50 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.060 17:09:50 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:09:04.060 17:09:50 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.060 17:09:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:04.060 17:09:50 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.060 17:09:50 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:09:04.060 17:09:50 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.060 17:09:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- 
# set +x 00:09:04.060 17:09:50 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.060 17:09:50 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:09:04.060 17:09:50 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:09:04.060 17:09:50 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:09:04.060 00:09:04.060 real 0m0.303s 00:09:04.060 user 0m0.192s 00:09:04.060 sys 0m0.034s 00:09:04.060 17:09:50 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:04.060 17:09:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:04.060 ************************************ 00:09:04.060 END TEST rpc_integrity 00:09:04.060 ************************************ 00:09:04.060 17:09:50 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:09:04.060 17:09:50 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:04.060 17:09:50 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:04.060 17:09:50 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:04.060 ************************************ 00:09:04.060 START TEST rpc_plugins 00:09:04.060 ************************************ 00:09:04.060 17:09:50 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:09:04.060 17:09:50 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:09:04.060 17:09:50 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.060 17:09:50 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:04.319 17:09:50 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.319 17:09:50 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:09:04.319 17:09:50 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:09:04.319 17:09:50 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.319 17:09:50 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:04.319 17:09:50 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.319 17:09:50 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:09:04.319 { 00:09:04.319 "aliases": [ 00:09:04.319 "2bbe67f5-1b08-4390-8310-03f01233b694" 00:09:04.319 ], 00:09:04.319 "assigned_rate_limits": { 00:09:04.319 "r_mbytes_per_sec": 0, 00:09:04.319 "rw_ios_per_sec": 0, 00:09:04.319 "rw_mbytes_per_sec": 0, 00:09:04.319 "w_mbytes_per_sec": 0 00:09:04.319 }, 00:09:04.319 "block_size": 4096, 00:09:04.319 "claimed": false, 00:09:04.319 "driver_specific": {}, 00:09:04.319 "memory_domains": [ 00:09:04.319 { 00:09:04.319 "dma_device_id": "system", 00:09:04.319 "dma_device_type": 1 00:09:04.319 }, 00:09:04.319 { 00:09:04.319 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:04.319 "dma_device_type": 2 00:09:04.319 } 00:09:04.319 ], 00:09:04.319 "name": "Malloc1", 00:09:04.319 "num_blocks": 256, 00:09:04.319 "product_name": "Malloc disk", 00:09:04.319 "supported_io_types": { 00:09:04.319 "abort": true, 00:09:04.319 "compare": false, 00:09:04.319 "compare_and_write": false, 00:09:04.319 "copy": true, 00:09:04.319 "flush": true, 00:09:04.319 "get_zone_info": false, 00:09:04.319 "nvme_admin": false, 00:09:04.319 "nvme_io": false, 00:09:04.319 "nvme_io_md": false, 00:09:04.319 "nvme_iov_md": false, 00:09:04.319 "read": true, 00:09:04.319 "reset": true, 00:09:04.319 "seek_data": false, 00:09:04.319 "seek_hole": false, 00:09:04.319 "unmap": true, 00:09:04.319 "write": true, 00:09:04.319 "write_zeroes": true, 00:09:04.319 "zcopy": true, 00:09:04.319 "zone_append": false, 
00:09:04.320 "zone_management": false 00:09:04.320 }, 00:09:04.320 "uuid": "2bbe67f5-1b08-4390-8310-03f01233b694", 00:09:04.320 "zoned": false 00:09:04.320 } 00:09:04.320 ]' 00:09:04.320 17:09:50 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:09:04.320 17:09:50 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:09:04.320 17:09:50 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:09:04.320 17:09:50 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.320 17:09:50 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:04.320 17:09:50 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.320 17:09:50 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:09:04.320 17:09:50 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.320 17:09:50 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:04.320 17:09:50 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.320 17:09:50 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:09:04.320 17:09:50 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:09:04.320 17:09:50 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:09:04.320 00:09:04.320 real 0m0.150s 00:09:04.320 user 0m0.104s 00:09:04.320 sys 0m0.016s 00:09:04.320 17:09:50 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:04.320 17:09:50 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:04.320 ************************************ 00:09:04.320 END TEST rpc_plugins 00:09:04.320 ************************************ 00:09:04.320 17:09:50 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:09:04.320 17:09:50 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:04.320 17:09:50 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:04.320 17:09:50 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:04.320 ************************************ 00:09:04.320 START TEST rpc_trace_cmd_test 00:09:04.320 ************************************ 00:09:04.320 17:09:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:09:04.320 17:09:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:09:04.320 17:09:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:09:04.320 17:09:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.320 17:09:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.320 17:09:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.320 17:09:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:09:04.320 "bdev": { 00:09:04.320 "mask": "0x8", 00:09:04.320 "tpoint_mask": "0xffffffffffffffff" 00:09:04.320 }, 00:09:04.320 "bdev_nvme": { 00:09:04.320 "mask": "0x4000", 00:09:04.320 "tpoint_mask": "0x0" 00:09:04.320 }, 00:09:04.320 "blobfs": { 00:09:04.320 "mask": "0x80", 00:09:04.320 "tpoint_mask": "0x0" 00:09:04.320 }, 00:09:04.320 "dsa": { 00:09:04.320 "mask": "0x200", 00:09:04.320 "tpoint_mask": "0x0" 00:09:04.320 }, 00:09:04.320 "ftl": { 00:09:04.320 "mask": "0x40", 00:09:04.320 "tpoint_mask": "0x0" 00:09:04.320 }, 00:09:04.320 "iaa": { 00:09:04.320 "mask": "0x1000", 00:09:04.320 "tpoint_mask": "0x0" 00:09:04.320 }, 00:09:04.320 "iscsi_conn": { 00:09:04.320 "mask": "0x2", 00:09:04.320 "tpoint_mask": "0x0" 00:09:04.320 }, 00:09:04.320 "nvme_pcie": { 00:09:04.320 "mask": "0x800", 
00:09:04.320 "tpoint_mask": "0x0" 00:09:04.320 }, 00:09:04.320 "nvme_tcp": { 00:09:04.320 "mask": "0x2000", 00:09:04.320 "tpoint_mask": "0x0" 00:09:04.320 }, 00:09:04.320 "nvmf_rdma": { 00:09:04.320 "mask": "0x10", 00:09:04.320 "tpoint_mask": "0x0" 00:09:04.320 }, 00:09:04.320 "nvmf_tcp": { 00:09:04.320 "mask": "0x20", 00:09:04.320 "tpoint_mask": "0x0" 00:09:04.320 }, 00:09:04.320 "scsi": { 00:09:04.320 "mask": "0x4", 00:09:04.320 "tpoint_mask": "0x0" 00:09:04.320 }, 00:09:04.320 "sock": { 00:09:04.320 "mask": "0x8000", 00:09:04.320 "tpoint_mask": "0x0" 00:09:04.320 }, 00:09:04.320 "thread": { 00:09:04.320 "mask": "0x400", 00:09:04.320 "tpoint_mask": "0x0" 00:09:04.320 }, 00:09:04.320 "tpoint_group_mask": "0x8", 00:09:04.320 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid60670" 00:09:04.320 }' 00:09:04.320 17:09:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:09:04.320 17:09:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:09:04.579 17:09:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:09:04.579 17:09:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:09:04.579 17:09:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:09:04.579 17:09:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:09:04.579 17:09:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:09:04.579 17:09:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:09:04.579 17:09:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:09:04.579 17:09:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:09:04.579 00:09:04.579 real 0m0.273s 00:09:04.579 user 0m0.232s 00:09:04.579 sys 0m0.031s 00:09:04.579 17:09:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:04.580 ************************************ 00:09:04.580 END TEST rpc_trace_cmd_test 00:09:04.580 17:09:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.580 ************************************ 00:09:04.580 17:09:50 rpc -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 00:09:04.580 17:09:50 rpc -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 00:09:04.580 17:09:50 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:04.580 17:09:50 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:04.580 17:09:50 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:04.580 ************************************ 00:09:04.580 START TEST go_rpc 00:09:04.580 ************************************ 00:09:04.580 17:09:50 rpc.go_rpc -- common/autotest_common.sh@1125 -- # go_rpc 00:09:04.580 17:09:50 rpc.go_rpc -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:09:04.839 17:09:50 rpc.go_rpc -- rpc/rpc.sh@51 -- # bdevs='[]' 00:09:04.839 17:09:50 rpc.go_rpc -- rpc/rpc.sh@52 -- # jq length 00:09:04.839 17:09:50 rpc.go_rpc -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 00:09:04.839 17:09:50 rpc.go_rpc -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512 00:09:04.839 17:09:50 rpc.go_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.839 17:09:50 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:04.839 17:09:50 rpc.go_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.839 17:09:50 rpc.go_rpc -- rpc/rpc.sh@54 -- # malloc=Malloc2 00:09:04.839 17:09:50 rpc.go_rpc -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:09:04.839 17:09:50 rpc.go_rpc 
-- rpc/rpc.sh@56 -- # bdevs='[{"aliases":["b826a01b-439c-49b0-b669-70950f02efe7"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"system","dma_device_type":1},{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"copy":true,"flush":true,"get_zone_info":false,"nvme_admin":false,"nvme_io":false,"nvme_io_md":false,"nvme_iov_md":false,"read":true,"reset":true,"seek_data":false,"seek_hole":false,"unmap":true,"write":true,"write_zeroes":true,"zcopy":true,"zone_append":false,"zone_management":false},"uuid":"b826a01b-439c-49b0-b669-70950f02efe7","zoned":false}]' 00:09:04.839 17:09:50 rpc.go_rpc -- rpc/rpc.sh@57 -- # jq length 00:09:04.839 17:09:50 rpc.go_rpc -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']' 00:09:04.839 17:09:50 rpc.go_rpc -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2 00:09:04.839 17:09:50 rpc.go_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.840 17:09:50 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:04.840 17:09:50 rpc.go_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.840 17:09:50 rpc.go_rpc -- rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:09:04.840 17:09:50 rpc.go_rpc -- rpc/rpc.sh@60 -- # bdevs='[]' 00:09:04.840 17:09:50 rpc.go_rpc -- rpc/rpc.sh@61 -- # jq length 00:09:04.840 17:09:51 rpc.go_rpc -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']' 00:09:04.840 00:09:04.840 real 0m0.237s 00:09:04.840 user 0m0.161s 00:09:04.840 sys 0m0.040s 00:09:04.840 17:09:51 rpc.go_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:04.840 ************************************ 00:09:04.840 17:09:51 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:04.840 END TEST go_rpc 00:09:04.840 ************************************ 00:09:05.099 17:09:51 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:09:05.099 17:09:51 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:09:05.099 17:09:51 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:05.099 17:09:51 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:05.099 17:09:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.099 ************************************ 00:09:05.099 START TEST rpc_daemon_integrity 00:09:05.099 ************************************ 00:09:05.099 17:09:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:09:05.099 17:09:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:09:05.099 17:09:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.099 17:09:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:05.099 17:09:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.099 17:09:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:09:05.099 17:09:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:09:05.099 17:09:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:09:05.099 17:09:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:09:05.099 17:09:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.099 17:09:51 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:09:05.099 17:09:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.099 17:09:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc3 00:09:05.099 17:09:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:09:05.099 17:09:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.099 17:09:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:05.099 17:09:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.099 17:09:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:09:05.099 { 00:09:05.099 "aliases": [ 00:09:05.099 "7ea0fe66-2156-412d-a872-cce4aa6a21f4" 00:09:05.099 ], 00:09:05.099 "assigned_rate_limits": { 00:09:05.099 "r_mbytes_per_sec": 0, 00:09:05.099 "rw_ios_per_sec": 0, 00:09:05.099 "rw_mbytes_per_sec": 0, 00:09:05.099 "w_mbytes_per_sec": 0 00:09:05.099 }, 00:09:05.099 "block_size": 512, 00:09:05.099 "claimed": false, 00:09:05.099 "driver_specific": {}, 00:09:05.100 "memory_domains": [ 00:09:05.100 { 00:09:05.100 "dma_device_id": "system", 00:09:05.100 "dma_device_type": 1 00:09:05.100 }, 00:09:05.100 { 00:09:05.100 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.100 "dma_device_type": 2 00:09:05.100 } 00:09:05.100 ], 00:09:05.100 "name": "Malloc3", 00:09:05.100 "num_blocks": 16384, 00:09:05.100 "product_name": "Malloc disk", 00:09:05.100 "supported_io_types": { 00:09:05.100 "abort": true, 00:09:05.100 "compare": false, 00:09:05.100 "compare_and_write": false, 00:09:05.100 "copy": true, 00:09:05.100 "flush": true, 00:09:05.100 "get_zone_info": false, 00:09:05.100 "nvme_admin": false, 00:09:05.100 "nvme_io": false, 00:09:05.100 "nvme_io_md": false, 00:09:05.100 "nvme_iov_md": false, 00:09:05.100 "read": true, 00:09:05.100 "reset": true, 00:09:05.100 "seek_data": false, 00:09:05.100 "seek_hole": false, 00:09:05.100 "unmap": true, 00:09:05.100 "write": true, 00:09:05.100 "write_zeroes": true, 00:09:05.100 "zcopy": true, 00:09:05.100 "zone_append": false, 00:09:05.100 "zone_management": false 00:09:05.100 }, 00:09:05.100 "uuid": "7ea0fe66-2156-412d-a872-cce4aa6a21f4", 00:09:05.100 "zoned": false 00:09:05.100 } 00:09:05.100 ]' 00:09:05.100 17:09:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:09:05.100 17:09:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:09:05.100 17:09:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0 00:09:05.100 17:09:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.100 17:09:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:05.100 [2024-07-24 17:09:51.237225] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:09:05.100 [2024-07-24 17:09:51.237277] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:05.100 [2024-07-24 17:09:51.237301] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe2dbd0 00:09:05.100 [2024-07-24 17:09:51.237311] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:05.100 [2024-07-24 17:09:51.238868] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:05.100 [2024-07-24 17:09:51.238899] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:09:05.100 Passthru0 00:09:05.100 17:09:51 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.100 17:09:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:09:05.100 17:09:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.100 17:09:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:05.100 17:09:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.100 17:09:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:09:05.100 { 00:09:05.100 "aliases": [ 00:09:05.100 "7ea0fe66-2156-412d-a872-cce4aa6a21f4" 00:09:05.100 ], 00:09:05.100 "assigned_rate_limits": { 00:09:05.100 "r_mbytes_per_sec": 0, 00:09:05.100 "rw_ios_per_sec": 0, 00:09:05.100 "rw_mbytes_per_sec": 0, 00:09:05.100 "w_mbytes_per_sec": 0 00:09:05.100 }, 00:09:05.100 "block_size": 512, 00:09:05.100 "claim_type": "exclusive_write", 00:09:05.100 "claimed": true, 00:09:05.100 "driver_specific": {}, 00:09:05.100 "memory_domains": [ 00:09:05.100 { 00:09:05.100 "dma_device_id": "system", 00:09:05.100 "dma_device_type": 1 00:09:05.100 }, 00:09:05.100 { 00:09:05.100 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.100 "dma_device_type": 2 00:09:05.100 } 00:09:05.100 ], 00:09:05.100 "name": "Malloc3", 00:09:05.100 "num_blocks": 16384, 00:09:05.100 "product_name": "Malloc disk", 00:09:05.100 "supported_io_types": { 00:09:05.100 "abort": true, 00:09:05.100 "compare": false, 00:09:05.100 "compare_and_write": false, 00:09:05.100 "copy": true, 00:09:05.100 "flush": true, 00:09:05.100 "get_zone_info": false, 00:09:05.100 "nvme_admin": false, 00:09:05.100 "nvme_io": false, 00:09:05.100 "nvme_io_md": false, 00:09:05.100 "nvme_iov_md": false, 00:09:05.100 "read": true, 00:09:05.100 "reset": true, 00:09:05.100 "seek_data": false, 00:09:05.100 "seek_hole": false, 00:09:05.100 "unmap": true, 00:09:05.100 "write": true, 00:09:05.100 "write_zeroes": true, 00:09:05.100 "zcopy": true, 00:09:05.100 "zone_append": false, 00:09:05.100 "zone_management": false 00:09:05.100 }, 00:09:05.100 "uuid": "7ea0fe66-2156-412d-a872-cce4aa6a21f4", 00:09:05.100 "zoned": false 00:09:05.100 }, 00:09:05.100 { 00:09:05.100 "aliases": [ 00:09:05.100 "2a0ebf37-2bae-5ba9-8d69-886931e9066e" 00:09:05.100 ], 00:09:05.100 "assigned_rate_limits": { 00:09:05.100 "r_mbytes_per_sec": 0, 00:09:05.100 "rw_ios_per_sec": 0, 00:09:05.100 "rw_mbytes_per_sec": 0, 00:09:05.100 "w_mbytes_per_sec": 0 00:09:05.100 }, 00:09:05.100 "block_size": 512, 00:09:05.100 "claimed": false, 00:09:05.101 "driver_specific": { 00:09:05.101 "passthru": { 00:09:05.101 "base_bdev_name": "Malloc3", 00:09:05.101 "name": "Passthru0" 00:09:05.101 } 00:09:05.101 }, 00:09:05.101 "memory_domains": [ 00:09:05.101 { 00:09:05.101 "dma_device_id": "system", 00:09:05.101 "dma_device_type": 1 00:09:05.101 }, 00:09:05.101 { 00:09:05.101 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.101 "dma_device_type": 2 00:09:05.101 } 00:09:05.101 ], 00:09:05.101 "name": "Passthru0", 00:09:05.101 "num_blocks": 16384, 00:09:05.101 "product_name": "passthru", 00:09:05.101 "supported_io_types": { 00:09:05.101 "abort": true, 00:09:05.101 "compare": false, 00:09:05.101 "compare_and_write": false, 00:09:05.101 "copy": true, 00:09:05.101 "flush": true, 00:09:05.101 "get_zone_info": false, 00:09:05.101 "nvme_admin": false, 00:09:05.101 "nvme_io": false, 00:09:05.101 "nvme_io_md": false, 00:09:05.101 "nvme_iov_md": false, 00:09:05.101 "read": true, 00:09:05.101 "reset": true, 00:09:05.101 "seek_data": false, 00:09:05.101 "seek_hole": false, 00:09:05.101 
"unmap": true, 00:09:05.101 "write": true, 00:09:05.101 "write_zeroes": true, 00:09:05.101 "zcopy": true, 00:09:05.101 "zone_append": false, 00:09:05.101 "zone_management": false 00:09:05.101 }, 00:09:05.101 "uuid": "2a0ebf37-2bae-5ba9-8d69-886931e9066e", 00:09:05.101 "zoned": false 00:09:05.101 } 00:09:05.101 ]' 00:09:05.101 17:09:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:09:05.101 17:09:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:09:05.101 17:09:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:09:05.101 17:09:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.101 17:09:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:05.101 17:09:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.101 17:09:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc3 00:09:05.101 17:09:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.101 17:09:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:05.360 17:09:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.360 17:09:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:09:05.360 17:09:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.360 17:09:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:05.360 17:09:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.360 17:09:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:09:05.360 17:09:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:09:05.360 17:09:51 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:09:05.360 00:09:05.360 real 0m0.314s 00:09:05.360 user 0m0.204s 00:09:05.360 sys 0m0.042s 00:09:05.360 17:09:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:05.360 ************************************ 00:09:05.360 END TEST rpc_daemon_integrity 00:09:05.360 17:09:51 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:05.360 ************************************ 00:09:05.360 17:09:51 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:09:05.360 17:09:51 rpc -- rpc/rpc.sh@84 -- # killprocess 60670 00:09:05.360 17:09:51 rpc -- common/autotest_common.sh@950 -- # '[' -z 60670 ']' 00:09:05.360 17:09:51 rpc -- common/autotest_common.sh@954 -- # kill -0 60670 00:09:05.360 17:09:51 rpc -- common/autotest_common.sh@955 -- # uname 00:09:05.360 17:09:51 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:05.360 17:09:51 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60670 00:09:05.360 17:09:51 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:05.360 killing process with pid 60670 00:09:05.360 17:09:51 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:05.360 17:09:51 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60670' 00:09:05.360 17:09:51 rpc -- common/autotest_common.sh@969 -- # kill 60670 00:09:05.360 17:09:51 rpc -- common/autotest_common.sh@974 -- # wait 60670 00:09:06.015 00:09:06.015 real 0m3.102s 00:09:06.015 user 0m4.092s 00:09:06.015 sys 0m0.743s 00:09:06.015 17:09:51 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:06.015 17:09:51 rpc -- 
common/autotest_common.sh@10 -- # set +x 00:09:06.015 ************************************ 00:09:06.015 END TEST rpc 00:09:06.015 ************************************ 00:09:06.015 17:09:51 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:09:06.015 17:09:51 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:06.015 17:09:51 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:06.015 17:09:51 -- common/autotest_common.sh@10 -- # set +x 00:09:06.015 ************************************ 00:09:06.015 START TEST skip_rpc 00:09:06.015 ************************************ 00:09:06.015 17:09:51 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:09:06.015 * Looking for test storage... 00:09:06.015 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:09:06.015 17:09:52 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:06.015 17:09:52 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:09:06.015 17:09:52 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:09:06.015 17:09:52 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:06.015 17:09:52 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:06.015 17:09:52 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:06.015 ************************************ 00:09:06.015 START TEST skip_rpc 00:09:06.015 ************************************ 00:09:06.015 17:09:52 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:09:06.015 17:09:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=60931 00:09:06.015 17:09:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:09:06.015 17:09:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:06.015 17:09:52 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:09:06.015 [2024-07-24 17:09:52.090322] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
00:09:06.015 [2024-07-24 17:09:52.090421] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60931 ] 00:09:06.015 [2024-07-24 17:09:52.224329] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:06.274 [2024-07-24 17:09:52.344897] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.611 17:09:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:09:11.611 17:09:57 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:09:11.611 17:09:57 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:09:11.611 17:09:57 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:11.611 17:09:57 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:11.611 17:09:57 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:11.611 17:09:57 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:11.612 17:09:57 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:09:11.612 17:09:57 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.612 17:09:57 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:11.612 2024/07/24 17:09:57 error on client creation, err: error during client creation for Unix socket, err: could not connect to a Unix socket on address /var/tmp/spdk.sock, err: dial unix /var/tmp/spdk.sock: connect: no such file or directory 00:09:11.612 17:09:57 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:11.612 17:09:57 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:09:11.612 17:09:57 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:11.612 17:09:57 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:11.612 17:09:57 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:11.612 17:09:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:09:11.612 17:09:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 60931 00:09:11.612 17:09:57 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 60931 ']' 00:09:11.612 17:09:57 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 60931 00:09:11.612 17:09:57 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:09:11.612 17:09:57 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:11.612 17:09:57 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60931 00:09:11.612 17:09:57 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:11.612 killing process with pid 60931 00:09:11.612 17:09:57 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:11.612 17:09:57 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60931' 00:09:11.612 17:09:57 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 60931 00:09:11.612 17:09:57 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 60931 00:09:11.612 00:09:11.612 real 0m5.438s 00:09:11.612 user 0m5.038s 00:09:11.612 sys 0m0.300s 00:09:11.612 17:09:57 skip_rpc.skip_rpc -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:09:11.612 ************************************ 00:09:11.612 END TEST skip_rpc 00:09:11.612 ************************************ 00:09:11.612 17:09:57 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:11.612 17:09:57 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:09:11.612 17:09:57 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:11.612 17:09:57 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:11.612 17:09:57 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:11.612 ************************************ 00:09:11.612 START TEST skip_rpc_with_json 00:09:11.612 ************************************ 00:09:11.612 17:09:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:09:11.612 17:09:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:09:11.612 17:09:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=61029 00:09:11.612 17:09:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:11.612 17:09:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 61029 00:09:11.612 17:09:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:11.612 17:09:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 61029 ']' 00:09:11.612 17:09:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:11.612 17:09:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:11.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:11.612 17:09:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:11.612 17:09:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:11.612 17:09:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:11.612 [2024-07-24 17:09:57.573602] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
00:09:11.612 [2024-07-24 17:09:57.573700] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61029 ] 00:09:11.612 [2024-07-24 17:09:57.709764] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.612 [2024-07-24 17:09:57.832684] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.565 17:09:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:12.565 17:09:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:09:12.565 17:09:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:09:12.566 17:09:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.566 17:09:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:12.566 [2024-07-24 17:09:58.580446] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:09:12.566 2024/07/24 17:09:58 error on JSON-RPC call, method: nvmf_get_transports, params: map[trtype:tcp], err: error received for nvmf_get_transports method, err: Code=-19 Msg=No such device 00:09:12.566 request: 00:09:12.566 { 00:09:12.566 "method": "nvmf_get_transports", 00:09:12.566 "params": { 00:09:12.566 "trtype": "tcp" 00:09:12.566 } 00:09:12.566 } 00:09:12.566 Got JSON-RPC error response 00:09:12.566 GoRPCClient: error on JSON-RPC call 00:09:12.566 17:09:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:12.566 17:09:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:09:12.566 17:09:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.566 17:09:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:12.566 [2024-07-24 17:09:58.588535] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:12.566 17:09:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.566 17:09:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:09:12.566 17:09:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.566 17:09:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:12.566 17:09:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.566 17:09:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:12.566 { 00:09:12.566 "subsystems": [ 00:09:12.566 { 00:09:12.566 "subsystem": "keyring", 00:09:12.566 "config": [] 00:09:12.566 }, 00:09:12.566 { 00:09:12.566 "subsystem": "iobuf", 00:09:12.566 "config": [ 00:09:12.566 { 00:09:12.566 "method": "iobuf_set_options", 00:09:12.566 "params": { 00:09:12.566 "large_bufsize": 135168, 00:09:12.566 "large_pool_count": 1024, 00:09:12.566 "small_bufsize": 8192, 00:09:12.566 "small_pool_count": 8192 00:09:12.566 } 00:09:12.566 } 00:09:12.566 ] 00:09:12.566 }, 00:09:12.566 { 00:09:12.566 "subsystem": "sock", 00:09:12.566 "config": [ 00:09:12.566 { 00:09:12.566 "method": "sock_set_default_impl", 00:09:12.566 "params": { 00:09:12.566 "impl_name": "posix" 00:09:12.566 } 00:09:12.566 }, 00:09:12.566 { 00:09:12.566 "method": 
"sock_impl_set_options", 00:09:12.566 "params": { 00:09:12.566 "enable_ktls": false, 00:09:12.566 "enable_placement_id": 0, 00:09:12.566 "enable_quickack": false, 00:09:12.566 "enable_recv_pipe": true, 00:09:12.566 "enable_zerocopy_send_client": false, 00:09:12.566 "enable_zerocopy_send_server": true, 00:09:12.566 "impl_name": "ssl", 00:09:12.566 "recv_buf_size": 4096, 00:09:12.566 "send_buf_size": 4096, 00:09:12.566 "tls_version": 0, 00:09:12.566 "zerocopy_threshold": 0 00:09:12.566 } 00:09:12.566 }, 00:09:12.566 { 00:09:12.566 "method": "sock_impl_set_options", 00:09:12.566 "params": { 00:09:12.566 "enable_ktls": false, 00:09:12.566 "enable_placement_id": 0, 00:09:12.566 "enable_quickack": false, 00:09:12.566 "enable_recv_pipe": true, 00:09:12.566 "enable_zerocopy_send_client": false, 00:09:12.566 "enable_zerocopy_send_server": true, 00:09:12.566 "impl_name": "posix", 00:09:12.566 "recv_buf_size": 2097152, 00:09:12.566 "send_buf_size": 2097152, 00:09:12.566 "tls_version": 0, 00:09:12.566 "zerocopy_threshold": 0 00:09:12.566 } 00:09:12.566 } 00:09:12.566 ] 00:09:12.566 }, 00:09:12.566 { 00:09:12.566 "subsystem": "vmd", 00:09:12.566 "config": [] 00:09:12.566 }, 00:09:12.566 { 00:09:12.566 "subsystem": "accel", 00:09:12.566 "config": [ 00:09:12.566 { 00:09:12.566 "method": "accel_set_options", 00:09:12.566 "params": { 00:09:12.566 "buf_count": 2048, 00:09:12.566 "large_cache_size": 16, 00:09:12.566 "sequence_count": 2048, 00:09:12.566 "small_cache_size": 128, 00:09:12.566 "task_count": 2048 00:09:12.566 } 00:09:12.566 } 00:09:12.566 ] 00:09:12.566 }, 00:09:12.566 { 00:09:12.566 "subsystem": "bdev", 00:09:12.566 "config": [ 00:09:12.566 { 00:09:12.566 "method": "bdev_set_options", 00:09:12.566 "params": { 00:09:12.566 "bdev_auto_examine": true, 00:09:12.566 "bdev_io_cache_size": 256, 00:09:12.566 "bdev_io_pool_size": 65535, 00:09:12.566 "iobuf_large_cache_size": 16, 00:09:12.566 "iobuf_small_cache_size": 128 00:09:12.566 } 00:09:12.566 }, 00:09:12.566 { 00:09:12.566 "method": "bdev_raid_set_options", 00:09:12.566 "params": { 00:09:12.566 "process_max_bandwidth_mb_sec": 0, 00:09:12.566 "process_window_size_kb": 1024 00:09:12.566 } 00:09:12.566 }, 00:09:12.566 { 00:09:12.566 "method": "bdev_iscsi_set_options", 00:09:12.566 "params": { 00:09:12.566 "timeout_sec": 30 00:09:12.566 } 00:09:12.566 }, 00:09:12.566 { 00:09:12.566 "method": "bdev_nvme_set_options", 00:09:12.566 "params": { 00:09:12.566 "action_on_timeout": "none", 00:09:12.566 "allow_accel_sequence": false, 00:09:12.566 "arbitration_burst": 0, 00:09:12.566 "bdev_retry_count": 3, 00:09:12.566 "ctrlr_loss_timeout_sec": 0, 00:09:12.566 "delay_cmd_submit": true, 00:09:12.566 "dhchap_dhgroups": [ 00:09:12.566 "null", 00:09:12.566 "ffdhe2048", 00:09:12.566 "ffdhe3072", 00:09:12.566 "ffdhe4096", 00:09:12.566 "ffdhe6144", 00:09:12.566 "ffdhe8192" 00:09:12.566 ], 00:09:12.566 "dhchap_digests": [ 00:09:12.566 "sha256", 00:09:12.566 "sha384", 00:09:12.566 "sha512" 00:09:12.566 ], 00:09:12.566 "disable_auto_failback": false, 00:09:12.566 "fast_io_fail_timeout_sec": 0, 00:09:12.566 "generate_uuids": false, 00:09:12.566 "high_priority_weight": 0, 00:09:12.566 "io_path_stat": false, 00:09:12.566 "io_queue_requests": 0, 00:09:12.566 "keep_alive_timeout_ms": 10000, 00:09:12.566 "low_priority_weight": 0, 00:09:12.566 "medium_priority_weight": 0, 00:09:12.566 "nvme_adminq_poll_period_us": 10000, 00:09:12.566 "nvme_error_stat": false, 00:09:12.566 "nvme_ioq_poll_period_us": 0, 00:09:12.566 "rdma_cm_event_timeout_ms": 0, 00:09:12.566 "rdma_max_cq_size": 
0, 00:09:12.566 "rdma_srq_size": 0, 00:09:12.566 "reconnect_delay_sec": 0, 00:09:12.566 "timeout_admin_us": 0, 00:09:12.566 "timeout_us": 0, 00:09:12.566 "transport_ack_timeout": 0, 00:09:12.566 "transport_retry_count": 4, 00:09:12.566 "transport_tos": 0 00:09:12.566 } 00:09:12.566 }, 00:09:12.566 { 00:09:12.566 "method": "bdev_nvme_set_hotplug", 00:09:12.566 "params": { 00:09:12.566 "enable": false, 00:09:12.566 "period_us": 100000 00:09:12.566 } 00:09:12.566 }, 00:09:12.566 { 00:09:12.566 "method": "bdev_wait_for_examine" 00:09:12.566 } 00:09:12.566 ] 00:09:12.566 }, 00:09:12.566 { 00:09:12.566 "subsystem": "scsi", 00:09:12.566 "config": null 00:09:12.566 }, 00:09:12.566 { 00:09:12.566 "subsystem": "scheduler", 00:09:12.566 "config": [ 00:09:12.566 { 00:09:12.566 "method": "framework_set_scheduler", 00:09:12.566 "params": { 00:09:12.567 "name": "static" 00:09:12.567 } 00:09:12.567 } 00:09:12.567 ] 00:09:12.567 }, 00:09:12.567 { 00:09:12.567 "subsystem": "vhost_scsi", 00:09:12.567 "config": [] 00:09:12.567 }, 00:09:12.567 { 00:09:12.567 "subsystem": "vhost_blk", 00:09:12.567 "config": [] 00:09:12.567 }, 00:09:12.567 { 00:09:12.567 "subsystem": "ublk", 00:09:12.567 "config": [] 00:09:12.567 }, 00:09:12.567 { 00:09:12.567 "subsystem": "nbd", 00:09:12.567 "config": [] 00:09:12.567 }, 00:09:12.567 { 00:09:12.567 "subsystem": "nvmf", 00:09:12.567 "config": [ 00:09:12.567 { 00:09:12.567 "method": "nvmf_set_config", 00:09:12.567 "params": { 00:09:12.567 "admin_cmd_passthru": { 00:09:12.567 "identify_ctrlr": false 00:09:12.567 }, 00:09:12.567 "discovery_filter": "match_any" 00:09:12.567 } 00:09:12.567 }, 00:09:12.567 { 00:09:12.567 "method": "nvmf_set_max_subsystems", 00:09:12.567 "params": { 00:09:12.567 "max_subsystems": 1024 00:09:12.567 } 00:09:12.567 }, 00:09:12.567 { 00:09:12.567 "method": "nvmf_set_crdt", 00:09:12.567 "params": { 00:09:12.567 "crdt1": 0, 00:09:12.567 "crdt2": 0, 00:09:12.567 "crdt3": 0 00:09:12.567 } 00:09:12.567 }, 00:09:12.567 { 00:09:12.567 "method": "nvmf_create_transport", 00:09:12.567 "params": { 00:09:12.567 "abort_timeout_sec": 1, 00:09:12.567 "ack_timeout": 0, 00:09:12.567 "buf_cache_size": 4294967295, 00:09:12.567 "c2h_success": true, 00:09:12.567 "data_wr_pool_size": 0, 00:09:12.567 "dif_insert_or_strip": false, 00:09:12.567 "in_capsule_data_size": 4096, 00:09:12.567 "io_unit_size": 131072, 00:09:12.567 "max_aq_depth": 128, 00:09:12.567 "max_io_qpairs_per_ctrlr": 127, 00:09:12.567 "max_io_size": 131072, 00:09:12.567 "max_queue_depth": 128, 00:09:12.567 "num_shared_buffers": 511, 00:09:12.567 "sock_priority": 0, 00:09:12.567 "trtype": "TCP", 00:09:12.567 "zcopy": false 00:09:12.567 } 00:09:12.567 } 00:09:12.567 ] 00:09:12.567 }, 00:09:12.567 { 00:09:12.567 "subsystem": "iscsi", 00:09:12.567 "config": [ 00:09:12.567 { 00:09:12.567 "method": "iscsi_set_options", 00:09:12.567 "params": { 00:09:12.567 "allow_duplicated_isid": false, 00:09:12.567 "chap_group": 0, 00:09:12.567 "data_out_pool_size": 2048, 00:09:12.567 "default_time2retain": 20, 00:09:12.567 "default_time2wait": 2, 00:09:12.567 "disable_chap": false, 00:09:12.567 "error_recovery_level": 0, 00:09:12.567 "first_burst_length": 8192, 00:09:12.567 "immediate_data": true, 00:09:12.567 "immediate_data_pool_size": 16384, 00:09:12.567 "max_connections_per_session": 2, 00:09:12.567 "max_large_datain_per_connection": 64, 00:09:12.567 "max_queue_depth": 64, 00:09:12.567 "max_r2t_per_connection": 4, 00:09:12.567 "max_sessions": 128, 00:09:12.567 "mutual_chap": false, 00:09:12.567 "node_base": "iqn.2016-06.io.spdk", 
00:09:12.567 "nop_in_interval": 30, 00:09:12.567 "nop_timeout": 60, 00:09:12.567 "pdu_pool_size": 36864, 00:09:12.567 "require_chap": false 00:09:12.567 } 00:09:12.567 } 00:09:12.567 ] 00:09:12.567 } 00:09:12.567 ] 00:09:12.567 } 00:09:12.567 17:09:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:12.567 17:09:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 61029 00:09:12.567 17:09:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 61029 ']' 00:09:12.567 17:09:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 61029 00:09:12.567 17:09:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:09:12.567 17:09:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:12.567 17:09:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61029 00:09:12.567 17:09:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:12.567 17:09:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:12.567 killing process with pid 61029 00:09:12.567 17:09:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61029' 00:09:12.567 17:09:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 61029 00:09:12.567 17:09:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 61029 00:09:13.135 17:09:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=61063 00:09:13.135 17:09:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:13.135 17:09:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:09:18.407 17:10:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 61063 00:09:18.408 17:10:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 61063 ']' 00:09:18.408 17:10:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 61063 00:09:18.408 17:10:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:09:18.408 17:10:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:18.408 17:10:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61063 00:09:18.408 17:10:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:18.408 17:10:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:18.408 killing process with pid 61063 00:09:18.408 17:10:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61063' 00:09:18.408 17:10:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 61063 00:09:18.408 17:10:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 61063 00:09:18.408 17:10:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:09:18.408 17:10:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:09:18.408 00:09:18.408 real 0m7.094s 00:09:18.408 user 0m6.842s 00:09:18.408 sys 0m0.663s 00:09:18.408 17:10:04 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:18.408 17:10:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:18.408 ************************************ 00:09:18.408 END TEST skip_rpc_with_json 00:09:18.408 ************************************ 00:09:18.670 17:10:04 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:09:18.670 17:10:04 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:18.670 17:10:04 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:18.670 17:10:04 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:18.670 ************************************ 00:09:18.670 START TEST skip_rpc_with_delay 00:09:18.670 ************************************ 00:09:18.670 17:10:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:09:18.670 17:10:04 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:18.670 17:10:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:09:18.670 17:10:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:18.670 17:10:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:18.670 17:10:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:18.670 17:10:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:18.670 17:10:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:18.670 17:10:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:18.670 17:10:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:18.670 17:10:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:18.670 17:10:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:09:18.670 17:10:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:18.670 [2024-07-24 17:10:04.712520] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
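The spdk_app_start error above is the whole point of test_skip_rpc_with_delay: combining --no-rpc-server with --wait-for-rpc is contradictory, so the target must refuse to start, and the NOT wrapper only checks for a non-zero exit. A minimal standalone sketch of that check, using the binary path from this run (this bypasses the autotest NOT/valid_exec_arg helpers traced above, so treat it as illustrative only):

SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
# The target is expected to fail here; it starting successfully would be the test failure.
if "$SPDK_TGT" --no-rpc-server -m 0x1 --wait-for-rpc; then
    echo "ERROR: spdk_tgt started even though --wait-for-rpc has no RPC server" >&2
    exit 1
fi
echo "got the expected startup failure"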
00:09:18.670 [2024-07-24 17:10:04.712656] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:09:18.670 17:10:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:09:18.670 17:10:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:18.670 17:10:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:18.670 17:10:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:18.670 00:09:18.670 real 0m0.088s 00:09:18.670 user 0m0.055s 00:09:18.670 sys 0m0.032s 00:09:18.670 17:10:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:18.670 17:10:04 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:09:18.670 ************************************ 00:09:18.670 END TEST skip_rpc_with_delay 00:09:18.670 ************************************ 00:09:18.670 17:10:04 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:09:18.670 17:10:04 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:09:18.670 17:10:04 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:09:18.670 17:10:04 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:18.670 17:10:04 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:18.670 17:10:04 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:18.670 ************************************ 00:09:18.670 START TEST exit_on_failed_rpc_init 00:09:18.670 ************************************ 00:09:18.670 17:10:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:09:18.670 17:10:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=61178 00:09:18.671 17:10:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 61178 00:09:18.671 17:10:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 61178 ']' 00:09:18.671 17:10:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:09:18.671 17:10:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:18.671 17:10:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:18.671 17:10:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:18.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:18.671 17:10:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:18.671 17:10:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:09:18.671 [2024-07-24 17:10:04.858914] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
00:09:18.671 [2024-07-24 17:10:04.859018] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61178 ] 00:09:19.016 [2024-07-24 17:10:04.996264] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:19.016 [2024-07-24 17:10:05.125617] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:19.951 17:10:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:19.951 17:10:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:09:19.951 17:10:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:19.951 17:10:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:09:19.951 17:10:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:09:19.951 17:10:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:09:19.951 17:10:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:19.951 17:10:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:19.951 17:10:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:19.951 17:10:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:19.951 17:10:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:19.951 17:10:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:19.951 17:10:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:19.951 17:10:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:09:19.951 17:10:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:09:19.951 [2024-07-24 17:10:05.938009] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:09:19.951 [2024-07-24 17:10:05.938133] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61208 ] 00:09:19.951 [2024-07-24 17:10:06.079139] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:20.210 [2024-07-24 17:10:06.209713] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:20.210 [2024-07-24 17:10:06.209847] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
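exit_on_failed_rpc_init reduces to the socket collision reported above: the first target (pid 61178, core mask 0x1) already owns the default /var/tmp/spdk.sock, so a second target on core mask 0x2 cannot bring up its RPC service and must exit non-zero. A rough standalone reproduction, with a sleep standing in for the harness's waitforlisten helper (illustrative only):

SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
"$SPDK_TGT" -m 0x1 &              # first instance claims /var/tmp/spdk.sock
first_pid=$!
sleep 2                           # crude wait; the real test uses waitforlisten
if "$SPDK_TGT" -m 0x2; then       # second instance should fail RPC init and exit non-zero
    echo "ERROR: second target unexpectedly started" >&2
fi
kill -SIGINT "$first_pid" && wait "$first_pid"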
00:09:20.210 [2024-07-24 17:10:06.209865] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:09:20.210 [2024-07-24 17:10:06.209876] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:20.210 17:10:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:09:20.210 17:10:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:20.210 17:10:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:09:20.210 17:10:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:09:20.210 17:10:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:09:20.210 17:10:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:20.210 17:10:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:09:20.210 17:10:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 61178 00:09:20.210 17:10:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 61178 ']' 00:09:20.210 17:10:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 61178 00:09:20.210 17:10:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:09:20.210 17:10:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:20.210 17:10:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61178 00:09:20.210 killing process with pid 61178 00:09:20.210 17:10:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:20.210 17:10:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:20.210 17:10:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61178' 00:09:20.210 17:10:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 61178 00:09:20.210 17:10:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 61178 00:09:20.777 00:09:20.777 real 0m1.972s 00:09:20.777 user 0m2.360s 00:09:20.777 sys 0m0.446s 00:09:20.777 17:10:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:20.777 ************************************ 00:09:20.777 END TEST exit_on_failed_rpc_init 00:09:20.777 ************************************ 00:09:20.777 17:10:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:09:20.777 17:10:06 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:09:20.777 00:09:20.777 real 0m14.878s 00:09:20.777 user 0m14.396s 00:09:20.777 sys 0m1.617s 00:09:20.777 ************************************ 00:09:20.777 END TEST skip_rpc 00:09:20.777 ************************************ 00:09:20.777 17:10:06 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:20.777 17:10:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:20.777 17:10:06 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:09:20.777 17:10:06 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:20.777 17:10:06 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:20.777 17:10:06 -- common/autotest_common.sh@10 -- # set +x 00:09:20.777 
************************************ 00:09:20.777 START TEST rpc_client 00:09:20.777 ************************************ 00:09:20.777 17:10:06 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:09:20.777 * Looking for test storage... 00:09:20.777 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:09:20.777 17:10:06 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:09:20.777 OK 00:09:20.777 17:10:06 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:09:20.777 ************************************ 00:09:20.777 END TEST rpc_client 00:09:20.777 ************************************ 00:09:20.777 00:09:20.777 real 0m0.101s 00:09:20.777 user 0m0.045s 00:09:20.777 sys 0m0.063s 00:09:20.777 17:10:06 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:20.777 17:10:06 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:09:20.777 17:10:06 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:09:20.777 17:10:06 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:20.777 17:10:06 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:20.777 17:10:06 -- common/autotest_common.sh@10 -- # set +x 00:09:20.777 ************************************ 00:09:20.777 START TEST json_config 00:09:20.777 ************************************ 00:09:20.777 17:10:07 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:09:21.037 17:10:07 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:21.037 17:10:07 json_config -- nvmf/common.sh@7 -- # uname -s 00:09:21.037 17:10:07 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:21.037 17:10:07 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:21.037 17:10:07 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:21.037 17:10:07 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:21.037 17:10:07 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:21.037 17:10:07 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:21.037 17:10:07 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:21.037 17:10:07 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:21.037 17:10:07 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:21.037 17:10:07 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:21.037 17:10:07 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:09:21.037 17:10:07 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:09:21.037 17:10:07 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:21.037 17:10:07 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:21.037 17:10:07 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:09:21.037 17:10:07 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:21.037 17:10:07 json_config -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:21.037 17:10:07 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:21.037 17:10:07 json_config -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:21.037 17:10:07 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:21.037 17:10:07 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.037 17:10:07 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.037 17:10:07 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.037 17:10:07 json_config -- paths/export.sh@5 -- # export PATH 00:09:21.037 17:10:07 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.037 17:10:07 json_config -- nvmf/common.sh@47 -- # : 0 00:09:21.037 17:10:07 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:21.037 17:10:07 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:21.037 17:10:07 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:21.037 17:10:07 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:21.037 17:10:07 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:21.037 17:10:07 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:21.037 17:10:07 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:21.037 17:10:07 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:21.037 17:10:07 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:09:21.037 17:10:07 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:09:21.037 17:10:07 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:09:21.037 17:10:07 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:09:21.037 17:10:07 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:09:21.037 17:10:07 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:09:21.037 17:10:07 json_config -- 
json_config/json_config.sh@31 -- # declare -A app_pid 00:09:21.037 17:10:07 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:09:21.037 17:10:07 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:09:21.037 17:10:07 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:09:21.037 17:10:07 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:09:21.038 17:10:07 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:09:21.038 17:10:07 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:09:21.038 17:10:07 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:09:21.038 17:10:07 json_config -- json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:09:21.038 17:10:07 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:09:21.038 INFO: JSON configuration test init 00:09:21.038 17:10:07 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:09:21.038 17:10:07 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:09:21.038 17:10:07 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:21.038 17:10:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:21.038 17:10:07 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:09:21.038 17:10:07 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:21.038 17:10:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:21.038 Waiting for target to run... 00:09:21.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:09:21.038 17:10:07 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:09:21.038 17:10:07 json_config -- json_config/common.sh@9 -- # local app=target 00:09:21.038 17:10:07 json_config -- json_config/common.sh@10 -- # shift 00:09:21.038 17:10:07 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:09:21.038 17:10:07 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:09:21.038 17:10:07 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:09:21.038 17:10:07 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:21.038 17:10:07 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:21.038 17:10:07 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=61326 00:09:21.038 17:10:07 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
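Everything the json_config suite does against this target goes through the tgt_rpc wrapper, which is simply rpc.py pointed at the /var/tmp/spdk_tgt.sock socket declared above. A hand-run equivalent of the first few calls traced below (the pipe wiring in common.sh differs slightly, and /tmp/running_config.json is an arbitrary name used only for this sketch):

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
# Describe the locally attached NVMe devices and load them into the target.
/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems | $RPC load_config
# Dump the resulting runtime configuration back out.
$RPC save_config > /tmp/running_config.json
# List the notification types the target advertises (bdev_register/bdev_unregister here).
$RPC notify_get_types | jq -r '.[]'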
00:09:21.038 17:10:07 json_config -- json_config/common.sh@25 -- # waitforlisten 61326 /var/tmp/spdk_tgt.sock 00:09:21.038 17:10:07 json_config -- common/autotest_common.sh@831 -- # '[' -z 61326 ']' 00:09:21.038 17:10:07 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:09:21.038 17:10:07 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:09:21.038 17:10:07 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:21.038 17:10:07 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:09:21.038 17:10:07 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:21.038 17:10:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:21.038 [2024-07-24 17:10:07.163147] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:09:21.038 [2024-07-24 17:10:07.163420] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61326 ] 00:09:21.605 [2024-07-24 17:10:07.576381] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.605 [2024-07-24 17:10:07.675744] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.173 17:10:08 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:22.173 17:10:08 json_config -- common/autotest_common.sh@864 -- # return 0 00:09:22.173 17:10:08 json_config -- json_config/common.sh@26 -- # echo '' 00:09:22.173 00:09:22.173 17:10:08 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:09:22.173 17:10:08 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:09:22.173 17:10:08 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:22.173 17:10:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:22.173 17:10:08 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:09:22.173 17:10:08 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:09:22.173 17:10:08 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:22.173 17:10:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:22.173 17:10:08 json_config -- json_config/json_config.sh@277 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:09:22.173 17:10:08 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:09:22.173 17:10:08 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:09:22.741 17:10:08 json_config -- json_config/json_config.sh@280 -- # tgt_check_notification_types 00:09:22.741 17:10:08 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:09:22.741 17:10:08 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:22.741 17:10:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:22.741 17:10:08 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:09:22.741 17:10:08 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:09:22.741 17:10:08 json_config -- 
json_config/json_config.sh@46 -- # local enabled_types 00:09:22.741 17:10:08 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:09:22.741 17:10:08 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:09:22.741 17:10:08 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:09:23.000 17:10:08 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:09:23.000 17:10:08 json_config -- json_config/json_config.sh@48 -- # local get_types 00:09:23.000 17:10:08 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:09:23.000 17:10:08 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:09:23.000 17:10:08 json_config -- json_config/json_config.sh@51 -- # sort 00:09:23.000 17:10:08 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:09:23.000 17:10:08 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:09:23.000 17:10:09 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:09:23.000 17:10:09 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:09:23.000 17:10:09 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:09:23.000 17:10:09 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:23.000 17:10:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:23.000 17:10:09 json_config -- json_config/json_config.sh@59 -- # return 0 00:09:23.000 17:10:09 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:09:23.000 17:10:09 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:09:23.000 17:10:09 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:09:23.000 17:10:09 json_config -- json_config/json_config.sh@294 -- # [[ 1 -eq 1 ]] 00:09:23.000 17:10:09 json_config -- json_config/json_config.sh@295 -- # create_nvmf_subsystem_config 00:09:23.000 17:10:09 json_config -- json_config/json_config.sh@234 -- # timing_enter create_nvmf_subsystem_config 00:09:23.000 17:10:09 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:23.000 17:10:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:23.000 17:10:09 json_config -- json_config/json_config.sh@236 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:09:23.000 17:10:09 json_config -- json_config/json_config.sh@237 -- # [[ tcp == \r\d\m\a ]] 00:09:23.000 17:10:09 json_config -- json_config/json_config.sh@241 -- # [[ -z 127.0.0.1 ]] 00:09:23.000 17:10:09 json_config -- json_config/json_config.sh@246 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:09:23.000 17:10:09 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:09:23.288 MallocForNvmf0 00:09:23.288 17:10:09 json_config -- json_config/json_config.sh@247 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:09:23.288 17:10:09 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:09:23.547 MallocForNvmf1 00:09:23.547 17:10:09 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:09:23.547 17:10:09 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock 
nvmf_create_transport -t tcp -u 8192 -c 0 00:09:23.806 [2024-07-24 17:10:09.829455] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:23.806 17:10:09 json_config -- json_config/json_config.sh@250 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:23.806 17:10:09 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:24.064 17:10:10 json_config -- json_config/json_config.sh@251 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:09:24.064 17:10:10 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:09:24.323 17:10:10 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:09:24.323 17:10:10 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:09:24.581 17:10:10 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:09:24.581 17:10:10 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:09:24.581 [2024-07-24 17:10:10.777964] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:09:24.581 17:10:10 json_config -- json_config/json_config.sh@255 -- # timing_exit create_nvmf_subsystem_config 00:09:24.581 17:10:10 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:24.581 17:10:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:24.839 17:10:10 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:09:24.839 17:10:10 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:24.839 17:10:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:24.839 17:10:10 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:09:24.839 17:10:10 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:09:24.840 17:10:10 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:09:25.098 MallocBdevForConfigChangeCheck 00:09:25.098 17:10:11 json_config -- json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:09:25.098 17:10:11 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:25.098 17:10:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:25.098 17:10:11 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:09:25.098 17:10:11 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:25.356 INFO: shutting down applications... 00:09:25.356 17:10:11 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 
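Collected in one place, the create_nvmf_subsystem_config step that just completed is the following rpc.py sequence (arguments exactly as logged; the $RPC shorthand is only for readability):

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
$RPC bdev_malloc_create 8 512 --name MallocForNvmf0        # malloc bdevs backing the namespaces
$RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
$RPC nvmf_create_transport -t tcp -u 8192 -c 0             # TCP transport ("TCP Transport Init" above)
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
$RPC bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck   # used by the later diff check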
00:09:25.356 17:10:11 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:09:25.357 17:10:11 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:09:25.357 17:10:11 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:09:25.357 17:10:11 json_config -- json_config/json_config.sh@337 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:09:25.924 Calling clear_iscsi_subsystem 00:09:25.924 Calling clear_nvmf_subsystem 00:09:25.924 Calling clear_nbd_subsystem 00:09:25.924 Calling clear_ublk_subsystem 00:09:25.924 Calling clear_vhost_blk_subsystem 00:09:25.924 Calling clear_vhost_scsi_subsystem 00:09:25.924 Calling clear_bdev_subsystem 00:09:25.924 17:10:11 json_config -- json_config/json_config.sh@341 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:09:25.924 17:10:11 json_config -- json_config/json_config.sh@347 -- # count=100 00:09:25.924 17:10:11 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:09:25.924 17:10:11 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:25.924 17:10:11 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:09:25.924 17:10:11 json_config -- json_config/json_config.sh@349 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:09:26.182 17:10:12 json_config -- json_config/json_config.sh@349 -- # break 00:09:26.182 17:10:12 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:09:26.182 17:10:12 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:09:26.182 17:10:12 json_config -- json_config/common.sh@31 -- # local app=target 00:09:26.182 17:10:12 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:09:26.182 17:10:12 json_config -- json_config/common.sh@35 -- # [[ -n 61326 ]] 00:09:26.182 17:10:12 json_config -- json_config/common.sh@38 -- # kill -SIGINT 61326 00:09:26.182 17:10:12 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:09:26.182 17:10:12 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:26.182 17:10:12 json_config -- json_config/common.sh@41 -- # kill -0 61326 00:09:26.182 17:10:12 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:09:26.751 17:10:12 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:09:26.751 17:10:12 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:26.751 17:10:12 json_config -- json_config/common.sh@41 -- # kill -0 61326 00:09:26.751 17:10:12 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:09:26.751 17:10:12 json_config -- json_config/common.sh@43 -- # break 00:09:26.751 17:10:12 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:09:26.751 SPDK target shutdown done 00:09:26.751 17:10:12 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:09:26.751 INFO: relaunching applications... 00:09:26.751 17:10:12 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 
00:09:26.751 17:10:12 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:26.751 17:10:12 json_config -- json_config/common.sh@9 -- # local app=target 00:09:26.751 17:10:12 json_config -- json_config/common.sh@10 -- # shift 00:09:26.751 17:10:12 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:09:26.751 17:10:12 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:09:26.751 17:10:12 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:09:26.751 17:10:12 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:26.751 17:10:12 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:26.751 17:10:12 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=61606 00:09:26.751 17:10:12 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:26.751 Waiting for target to run... 00:09:26.751 17:10:12 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:09:26.751 17:10:12 json_config -- json_config/common.sh@25 -- # waitforlisten 61606 /var/tmp/spdk_tgt.sock 00:09:26.751 17:10:12 json_config -- common/autotest_common.sh@831 -- # '[' -z 61606 ']' 00:09:26.751 17:10:12 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:09:26.751 17:10:12 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:26.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:09:26.751 17:10:12 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:09:26.751 17:10:12 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:26.751 17:10:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:26.751 [2024-07-24 17:10:12.889457] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:09:26.751 [2024-07-24 17:10:12.889566] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61606 ] 00:09:27.337 [2024-07-24 17:10:13.314694] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:27.337 [2024-07-24 17:10:13.437645] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.596 [2024-07-24 17:10:13.766508] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:27.596 [2024-07-24 17:10:13.798616] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:09:27.855 17:10:13 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:27.855 17:10:13 json_config -- common/autotest_common.sh@864 -- # return 0 00:09:27.855 00:09:27.855 17:10:13 json_config -- json_config/common.sh@26 -- # echo '' 00:09:27.855 17:10:13 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:09:27.855 INFO: Checking if target configuration is the same... 00:09:27.855 17:10:13 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 
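The check that follows is order-insensitive: both the live configuration and the on-disk spdk_tgt_config.json are passed through config_filter.py -method sort before diff -u, so only semantic differences register. A condensed sketch of what json_diff.sh does here (it actually uses mktemp names like /tmp/62.QZm; this assumes config_filter.py reads stdin, which is how json_diff.sh drives it):

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
FILTER=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
$RPC save_config | $FILTER -method sort > /tmp/live_sorted.json
$FILTER -method sort < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > /tmp/disk_sorted.json
diff -u /tmp/live_sorted.json /tmp/disk_sorted.json \
    && echo 'INFO: JSON config files are the same'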
00:09:27.855 17:10:13 json_config -- json_config/json_config.sh@382 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:27.855 17:10:13 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:09:27.855 17:10:13 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:27.855 + '[' 2 -ne 2 ']' 00:09:27.855 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:09:27.855 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:09:27.855 + rootdir=/home/vagrant/spdk_repo/spdk 00:09:27.855 +++ basename /dev/fd/62 00:09:27.855 ++ mktemp /tmp/62.XXX 00:09:27.855 + tmp_file_1=/tmp/62.QZm 00:09:27.855 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:27.855 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:09:27.855 + tmp_file_2=/tmp/spdk_tgt_config.json.fqD 00:09:27.855 + ret=0 00:09:27.855 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:09:28.113 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:09:28.372 + diff -u /tmp/62.QZm /tmp/spdk_tgt_config.json.fqD 00:09:28.372 + echo 'INFO: JSON config files are the same' 00:09:28.372 INFO: JSON config files are the same 00:09:28.372 + rm /tmp/62.QZm /tmp/spdk_tgt_config.json.fqD 00:09:28.372 + exit 0 00:09:28.372 17:10:14 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:09:28.372 INFO: changing configuration and checking if this can be detected... 00:09:28.372 17:10:14 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:09:28.372 17:10:14 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:09:28.372 17:10:14 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:09:28.630 17:10:14 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:09:28.630 17:10:14 json_config -- json_config/json_config.sh@391 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:28.630 17:10:14 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:28.631 + '[' 2 -ne 2 ']' 00:09:28.631 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:09:28.631 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:09:28.631 + rootdir=/home/vagrant/spdk_repo/spdk 00:09:28.631 +++ basename /dev/fd/62 00:09:28.631 ++ mktemp /tmp/62.XXX 00:09:28.631 + tmp_file_1=/tmp/62.EhX 00:09:28.631 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:28.631 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:09:28.631 + tmp_file_2=/tmp/spdk_tgt_config.json.EKz 00:09:28.631 + ret=0 00:09:28.631 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:09:28.889 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:09:28.889 + diff -u /tmp/62.EhX /tmp/spdk_tgt_config.json.EKz 00:09:28.889 + ret=1 00:09:28.889 + echo '=== Start of file: /tmp/62.EhX ===' 00:09:28.889 + cat /tmp/62.EhX 00:09:28.889 + echo '=== End of file: /tmp/62.EhX ===' 00:09:28.889 + echo '' 00:09:28.889 + echo '=== Start of file: /tmp/spdk_tgt_config.json.EKz ===' 00:09:28.889 + cat /tmp/spdk_tgt_config.json.EKz 00:09:28.889 + echo '=== End of file: /tmp/spdk_tgt_config.json.EKz ===' 00:09:28.889 + echo '' 00:09:28.889 + rm /tmp/62.EhX /tmp/spdk_tgt_config.json.EKz 00:09:28.889 + exit 1 00:09:28.889 INFO: configuration change detected. 00:09:28.889 17:10:15 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:09:28.889 17:10:15 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:09:28.889 17:10:15 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:09:28.889 17:10:15 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:28.889 17:10:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:28.889 17:10:15 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:09:28.889 17:10:15 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:09:28.889 17:10:15 json_config -- json_config/json_config.sh@321 -- # [[ -n 61606 ]] 00:09:28.889 17:10:15 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:09:28.889 17:10:15 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:09:28.889 17:10:15 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:28.889 17:10:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:28.889 17:10:15 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 00:09:28.889 17:10:15 json_config -- json_config/json_config.sh@197 -- # uname -s 00:09:28.889 17:10:15 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:09:28.889 17:10:15 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:09:28.889 17:10:15 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:09:28.889 17:10:15 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:09:28.889 17:10:15 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:28.889 17:10:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:29.147 17:10:15 json_config -- json_config/json_config.sh@327 -- # killprocess 61606 00:09:29.147 17:10:15 json_config -- common/autotest_common.sh@950 -- # '[' -z 61606 ']' 00:09:29.147 17:10:15 json_config -- common/autotest_common.sh@954 -- # kill -0 61606 00:09:29.147 17:10:15 json_config -- common/autotest_common.sh@955 -- # uname 00:09:29.147 17:10:15 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:29.147 17:10:15 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61606 00:09:29.147 
17:10:15 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:29.147 17:10:15 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:29.147 killing process with pid 61606 00:09:29.147 17:10:15 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61606' 00:09:29.147 17:10:15 json_config -- common/autotest_common.sh@969 -- # kill 61606 00:09:29.147 17:10:15 json_config -- common/autotest_common.sh@974 -- # wait 61606 00:09:29.406 17:10:15 json_config -- json_config/json_config.sh@330 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:09:29.407 17:10:15 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:09:29.407 17:10:15 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:29.407 17:10:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:29.407 17:10:15 json_config -- json_config/json_config.sh@332 -- # return 0 00:09:29.407 INFO: Success 00:09:29.407 17:10:15 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:09:29.407 ************************************ 00:09:29.407 END TEST json_config 00:09:29.407 ************************************ 00:09:29.407 00:09:29.407 real 0m8.474s 00:09:29.407 user 0m12.121s 00:09:29.407 sys 0m1.875s 00:09:29.407 17:10:15 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:29.407 17:10:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:29.407 17:10:15 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:09:29.407 17:10:15 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:29.407 17:10:15 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:29.407 17:10:15 -- common/autotest_common.sh@10 -- # set +x 00:09:29.407 ************************************ 00:09:29.407 START TEST json_config_extra_key 00:09:29.407 ************************************ 00:09:29.407 17:10:15 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:09:29.407 17:10:15 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:29.407 17:10:15 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:09:29.407 17:10:15 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:29.407 17:10:15 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:29.407 17:10:15 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:29.407 17:10:15 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:29.407 17:10:15 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:29.407 17:10:15 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:29.407 17:10:15 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:29.407 17:10:15 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:29.407 17:10:15 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:29.407 17:10:15 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:29.407 17:10:15 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:09:29.407 17:10:15 
json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:09:29.407 17:10:15 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:29.407 17:10:15 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:29.407 17:10:15 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:09:29.407 17:10:15 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:29.407 17:10:15 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:29.407 17:10:15 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:29.407 17:10:15 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:29.407 17:10:15 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:29.407 17:10:15 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.407 17:10:15 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.407 17:10:15 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.407 17:10:15 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:09:29.407 17:10:15 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.407 17:10:15 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:09:29.407 17:10:15 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:29.407 17:10:15 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:29.407 17:10:15 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:29.407 17:10:15 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:29.407 17:10:15 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:29.407 17:10:15 json_config_extra_key -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:29.407 17:10:15 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:29.407 17:10:15 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:29.407 17:10:15 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:09:29.407 17:10:15 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:09:29.407 17:10:15 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:09:29.407 17:10:15 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:09:29.407 17:10:15 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:09:29.407 17:10:15 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:09:29.407 17:10:15 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:09:29.407 17:10:15 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:09:29.407 17:10:15 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:09:29.407 17:10:15 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:09:29.407 INFO: launching applications... 00:09:29.407 17:10:15 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:09:29.407 17:10:15 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:09:29.407 17:10:15 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:09:29.407 17:10:15 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:09:29.407 17:10:15 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:09:29.407 17:10:15 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:09:29.407 17:10:15 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:09:29.407 17:10:15 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:29.407 17:10:15 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:29.407 17:10:15 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=61776 00:09:29.407 Waiting for target to run... 00:09:29.407 17:10:15 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
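Unlike the earlier json_config run, json_config_extra_key starts the target straight from a pre-baked file via --json, so no RPC calls are needed before the app is live. The file uses the same subsystems/method/params shape seen in the config dump earlier in this log; below is a minimal invented example of that shape (not the actual extra_key.json), launched with the flags used on the next line:

cat > /tmp/minimal_tgt_config.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "scheduler",
      "config": [
        { "method": "framework_set_scheduler", "params": { "name": "static" } }
      ]
    }
  ]
}
EOF
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
    -r /var/tmp/spdk_tgt.sock --json /tmp/minimal_tgt_config.json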
00:09:29.407 17:10:15 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 61776 /var/tmp/spdk_tgt.sock 00:09:29.407 17:10:15 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 61776 ']' 00:09:29.407 17:10:15 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:09:29.407 17:10:15 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:09:29.407 17:10:15 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:29.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:09:29.407 17:10:15 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:09:29.407 17:10:15 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:29.407 17:10:15 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:09:29.666 [2024-07-24 17:10:15.676079] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:09:29.666 [2024-07-24 17:10:15.676201] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61776 ] 00:09:29.925 [2024-07-24 17:10:16.109287] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:30.183 [2024-07-24 17:10:16.213448] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.782 17:10:16 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:30.782 00:09:30.782 17:10:16 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:09:30.782 17:10:16 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:09:30.782 INFO: shutting down applications... 00:09:30.782 17:10:16 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
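The launch-and-wait sequence recorded above reduces to a small shell pattern: start spdk_tgt against extra_key.json on its own RPC socket, then poll that socket until the target answers. The sketch below is a condensed, hypothetical rendering of what json_config/common.sh drives here; the retry counts and the use of rpc_get_methods as the liveness probe are illustrative assumptions, not the literal helper code.

  SPDK_ROOT=/home/vagrant/spdk_repo/spdk
  SOCK=/var/tmp/spdk_tgt.sock
  "$SPDK_ROOT/build/bin/spdk_tgt" -m 0x1 -s 1024 -r "$SOCK" \
      --json "$SPDK_ROOT/test/json_config/extra_key.json" &
  app_pid=$!
  # waitforlisten equivalent: keep probing the RPC socket until spdk_tgt answers
  for _ in $(seq 1 100); do
      "$SPDK_ROOT/scripts/rpc.py" -s "$SOCK" rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.5
  done
  # the shutdown sequence that follows below sends SIGINT, then polls the PID
  # for up to 30 half-second intervals before declaring "SPDK target shutdown done"
  kill -SIGINT "$app_pid"
  for _ in $(seq 1 30); do
      kill -0 "$app_pid" 2>/dev/null || break
      sleep 0.5
  done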
00:09:30.782 17:10:16 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:09:30.782 17:10:16 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:09:30.782 17:10:16 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:09:30.782 17:10:16 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 61776 ]] 00:09:30.782 17:10:16 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 61776 00:09:30.782 17:10:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:09:30.782 17:10:16 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:30.782 17:10:16 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 61776 00:09:30.782 17:10:16 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:09:31.040 17:10:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:09:31.040 17:10:17 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:31.040 17:10:17 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 61776 00:09:31.040 17:10:17 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:09:31.040 17:10:17 json_config_extra_key -- json_config/common.sh@43 -- # break 00:09:31.040 17:10:17 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:09:31.040 SPDK target shutdown done 00:09:31.040 17:10:17 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:09:31.040 Success 00:09:31.040 17:10:17 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:09:31.040 00:09:31.040 real 0m1.678s 00:09:31.040 user 0m1.607s 00:09:31.040 sys 0m0.459s 00:09:31.040 17:10:17 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:31.040 ************************************ 00:09:31.040 END TEST json_config_extra_key 00:09:31.040 17:10:17 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:09:31.040 ************************************ 00:09:31.040 17:10:17 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:09:31.040 17:10:17 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:31.040 17:10:17 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:31.040 17:10:17 -- common/autotest_common.sh@10 -- # set +x 00:09:31.040 ************************************ 00:09:31.040 START TEST alias_rpc 00:09:31.040 ************************************ 00:09:31.040 17:10:17 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:09:31.299 * Looking for test storage... 
00:09:31.299 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:09:31.299 17:10:17 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:09:31.299 17:10:17 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=61858 00:09:31.299 17:10:17 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 61858 00:09:31.299 17:10:17 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:31.299 17:10:17 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 61858 ']' 00:09:31.299 17:10:17 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:31.299 17:10:17 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:31.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:31.299 17:10:17 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:31.300 17:10:17 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:31.300 17:10:17 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:31.300 [2024-07-24 17:10:17.423324] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:09:31.300 [2024-07-24 17:10:17.424042] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61858 ] 00:09:31.558 [2024-07-24 17:10:17.568745] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.558 [2024-07-24 17:10:17.687858] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.494 17:10:18 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:32.494 17:10:18 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:09:32.494 17:10:18 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:09:32.751 17:10:18 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 61858 00:09:32.751 17:10:18 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 61858 ']' 00:09:32.751 17:10:18 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 61858 00:09:32.751 17:10:18 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:09:32.751 17:10:18 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:32.751 17:10:18 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61858 00:09:32.751 17:10:18 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:32.751 17:10:18 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:32.751 killing process with pid 61858 00:09:32.751 17:10:18 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61858' 00:09:32.751 17:10:18 alias_rpc -- common/autotest_common.sh@969 -- # kill 61858 00:09:32.751 17:10:18 alias_rpc -- common/autotest_common.sh@974 -- # wait 61858 00:09:33.010 00:09:33.010 real 0m1.931s 00:09:33.010 user 0m2.248s 00:09:33.010 sys 0m0.473s 00:09:33.010 17:10:19 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:33.010 17:10:19 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:33.010 ************************************ 00:09:33.010 END TEST alias_rpc 00:09:33.010 ************************************ 00:09:33.010 
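The alias_rpc run that just finished has the same start/poll/kill shape; the step specific to this test is the rpc.py load_config -i call replayed against the running target. A hedged one-screen sketch of that flow follows (the input file name is invented for illustration, and plain kill stands in for the harness's killprocess helper):

  SPDK_ROOT=/home/vagrant/spdk_repo/spdk
  "$SPDK_ROOT/build/bin/spdk_tgt" &
  spdk_tgt_pid=$!
  # wait for /var/tmp/spdk.sock to come up, as the log shows, then replay a config;
  # load_config reads the JSON from stdin when no file is given
  "$SPDK_ROOT/scripts/rpc.py" load_config -i < aliased_config.json   # placeholder input; the real config is not shown in this log
  kill "$spdk_tgt_pid"
  wait "$spdk_tgt_pid" 2>/dev/null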
17:10:19 -- spdk/autotest.sh@176 -- # [[ 1 -eq 0 ]] 00:09:33.010 17:10:19 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:09:33.010 17:10:19 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:33.010 17:10:19 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:33.010 17:10:19 -- common/autotest_common.sh@10 -- # set +x 00:09:33.010 ************************************ 00:09:33.010 START TEST dpdk_mem_utility 00:09:33.010 ************************************ 00:09:33.010 17:10:19 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:09:33.268 * Looking for test storage... 00:09:33.268 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:09:33.268 17:10:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:09:33.268 17:10:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=61950 00:09:33.268 17:10:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 61950 00:09:33.268 17:10:19 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:09:33.268 17:10:19 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 61950 ']' 00:09:33.268 17:10:19 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:33.268 17:10:19 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:33.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:33.268 17:10:19 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:33.268 17:10:19 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:33.268 17:10:19 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:09:33.268 [2024-07-24 17:10:19.386823] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
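The long memory dump that follows is not free-form output: test_dpdk_mem_info.sh first asks the running target to serialize its DPDK memory state, then post-processes the resulting file with dpdk_mem_info.py. Roughly, and using the dump path the target reports in this run:

  SPDK_ROOT=/home/vagrant/spdk_repo/spdk
  # ask spdk_tgt to write its DPDK heap/mempool/memzone state;
  # the RPC reply names the file, /tmp/spdk_mem_dump.txt in this run
  "$SPDK_ROOT/scripts/rpc.py" env_dpdk_get_mem_stats
  # first invocation prints the totals view (heaps, mempools, memzones)
  "$SPDK_ROOT/scripts/dpdk_mem_info.py"
  # the -m 0 invocation is what produces the detailed per-heap element listing below
  "$SPDK_ROOT/scripts/dpdk_mem_info.py" -m 0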
00:09:33.268 [2024-07-24 17:10:19.386931] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61950 ] 00:09:33.527 [2024-07-24 17:10:19.525314] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.527 [2024-07-24 17:10:19.647131] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.500 17:10:20 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:34.500 17:10:20 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:09:34.500 17:10:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:09:34.500 17:10:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:09:34.500 17:10:20 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.500 17:10:20 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:09:34.500 { 00:09:34.500 "filename": "/tmp/spdk_mem_dump.txt" 00:09:34.500 } 00:09:34.500 17:10:20 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.500 17:10:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:09:34.500 DPDK memory size 814.000000 MiB in 1 heap(s) 00:09:34.500 1 heaps totaling size 814.000000 MiB 00:09:34.500 size: 814.000000 MiB heap id: 0 00:09:34.500 end heaps---------- 00:09:34.500 8 mempools totaling size 598.116089 MiB 00:09:34.500 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:09:34.500 size: 158.602051 MiB name: PDU_data_out_Pool 00:09:34.500 size: 84.521057 MiB name: bdev_io_61950 00:09:34.500 size: 51.011292 MiB name: evtpool_61950 00:09:34.500 size: 50.003479 MiB name: msgpool_61950 00:09:34.500 size: 21.763794 MiB name: PDU_Pool 00:09:34.500 size: 19.513306 MiB name: SCSI_TASK_Pool 00:09:34.500 size: 0.026123 MiB name: Session_Pool 00:09:34.500 end mempools------- 00:09:34.500 6 memzones totaling size 4.142822 MiB 00:09:34.500 size: 1.000366 MiB name: RG_ring_0_61950 00:09:34.500 size: 1.000366 MiB name: RG_ring_1_61950 00:09:34.500 size: 1.000366 MiB name: RG_ring_4_61950 00:09:34.500 size: 1.000366 MiB name: RG_ring_5_61950 00:09:34.500 size: 0.125366 MiB name: RG_ring_2_61950 00:09:34.500 size: 0.015991 MiB name: RG_ring_3_61950 00:09:34.500 end memzones------- 00:09:34.500 17:10:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:09:34.500 heap id: 0 total size: 814.000000 MiB number of busy elements: 215 number of free elements: 15 00:09:34.500 list of free elements. 
size: 12.487488 MiB 00:09:34.500 element at address: 0x200000400000 with size: 1.999512 MiB 00:09:34.500 element at address: 0x200018e00000 with size: 0.999878 MiB 00:09:34.500 element at address: 0x200019000000 with size: 0.999878 MiB 00:09:34.500 element at address: 0x200003e00000 with size: 0.996277 MiB 00:09:34.500 element at address: 0x200031c00000 with size: 0.994446 MiB 00:09:34.500 element at address: 0x200013800000 with size: 0.978699 MiB 00:09:34.500 element at address: 0x200007000000 with size: 0.959839 MiB 00:09:34.500 element at address: 0x200019200000 with size: 0.936584 MiB 00:09:34.500 element at address: 0x200000200000 with size: 0.837036 MiB 00:09:34.500 element at address: 0x20001aa00000 with size: 0.572815 MiB 00:09:34.500 element at address: 0x20000b200000 with size: 0.489990 MiB 00:09:34.500 element at address: 0x200000800000 with size: 0.487061 MiB 00:09:34.500 element at address: 0x200019400000 with size: 0.485657 MiB 00:09:34.500 element at address: 0x200027e00000 with size: 0.398132 MiB 00:09:34.500 element at address: 0x200003a00000 with size: 0.351685 MiB 00:09:34.500 list of standard malloc elements. size: 199.249939 MiB 00:09:34.500 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:09:34.500 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:09:34.500 element at address: 0x200018efff80 with size: 1.000122 MiB 00:09:34.500 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:09:34.500 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:09:34.500 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:09:34.500 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:09:34.500 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:09:34.500 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:09:34.500 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:09:34.500 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:09:34.500 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:09:34.500 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:09:34.500 element at address: 0x2000002d6780 with size: 0.000183 MiB 00:09:34.500 element at address: 0x2000002d6840 with size: 0.000183 MiB 00:09:34.500 element at address: 0x2000002d6900 with size: 0.000183 MiB 00:09:34.500 element at address: 0x2000002d69c0 with size: 0.000183 MiB 00:09:34.500 element at address: 0x2000002d6a80 with size: 0.000183 MiB 00:09:34.500 element at address: 0x2000002d6b40 with size: 0.000183 MiB 00:09:34.500 element at address: 0x2000002d6c00 with size: 0.000183 MiB 00:09:34.500 element at address: 0x2000002d6cc0 with size: 0.000183 MiB 00:09:34.500 element at address: 0x2000002d6d80 with size: 0.000183 MiB 00:09:34.500 element at address: 0x2000002d6e40 with size: 0.000183 MiB 00:09:34.500 element at address: 0x2000002d6f00 with size: 0.000183 MiB 00:09:34.500 element at address: 0x2000002d6fc0 with size: 0.000183 MiB 00:09:34.500 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:09:34.500 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:09:34.500 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:09:34.500 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:09:34.500 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:09:34.500 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:09:34.500 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:09:34.500 element at address: 0x2000002d7700 with size: 0.000183 MiB 
00:09:34.500 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:09:34.500 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:09:34.501 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:09:34.501 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:09:34.501 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:09:34.501 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:09:34.501 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:09:34.501 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:09:34.501 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:09:34.501 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:09:34.501 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:09:34.501 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:09:34.501 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:09:34.501 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:09:34.501 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:09:34.501 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:09:34.501 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:09:34.501 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:09:34.501 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:09:34.501 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:09:34.501 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:09:34.501 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:09:34.501 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:09:34.501 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:09:34.501 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:09:34.501 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:09:34.501 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:09:34.501 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:09:34.501 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:09:34.501 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:09:34.501 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:09:34.501 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:09:34.501 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:09:34.501 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:09:34.501 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:09:34.501 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:09:34.501 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:09:34.501 element at address: 0x200003adb300 with size: 0.000183 MiB 00:09:34.501 element at address: 0x200003adb500 with size: 0.000183 MiB 00:09:34.501 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:09:34.501 element at address: 0x200003affa80 with size: 0.000183 MiB 00:09:34.501 element at address: 0x200003affb40 with size: 0.000183 MiB 00:09:34.501 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:09:34.501 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:09:34.501 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:09:34.501 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:09:34.501 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:09:34.501 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:09:34.501 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:09:34.501 element at 
address: 0x20000b27dac0 with size: 0.000183 MiB 00:09:34.501 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:09:34.501 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:09:34.501 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:09:34.501 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:09:34.501 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:09:34.501 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:09:34.501 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:09:34.501 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:09:34.501 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:09:34.501 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:09:34.501 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:09:34.501 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:09:34.501 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:09:34.501 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:09:34.501 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:09:34.501 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:09:34.501 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:09:34.501 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:09:34.501 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:09:34.501 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:09:34.501 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:09:34.501 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:09:34.501 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:09:34.501 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:09:34.501 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:09:34.501 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:09:34.501 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:09:34.501 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:09:34.501 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:09:34.501 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:09:34.501 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:09:34.501 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:09:34.501 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:09:34.501 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:09:34.501 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:09:34.501 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:09:34.501 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:09:34.501 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:09:34.501 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:09:34.501 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:09:34.501 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:09:34.501 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:09:34.501 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:09:34.501 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:09:34.501 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:09:34.501 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:09:34.501 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:09:34.501 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:09:34.501 element at address: 0x20001aa94a80 
with size: 0.000183 MiB 00:09:34.501 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:09:34.501 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:09:34.501 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:09:34.501 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:09:34.501 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:09:34.501 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:09:34.501 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:09:34.501 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:09:34.501 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:09:34.501 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:09:34.501 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:09:34.501 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:09:34.501 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:09:34.501 element at address: 0x200027e65ec0 with size: 0.000183 MiB 00:09:34.501 element at address: 0x200027e65f80 with size: 0.000183 MiB 00:09:34.501 element at address: 0x200027e6cb80 with size: 0.000183 MiB 00:09:34.501 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:09:34.501 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:09:34.501 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:09:34.501 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:09:34.501 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:09:34.501 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:09:34.501 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:09:34.501 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:09:34.501 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:09:34.501 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:09:34.501 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:09:34.501 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:09:34.501 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:09:34.501 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:09:34.501 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:09:34.502 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:09:34.502 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:09:34.502 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:09:34.502 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:09:34.502 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:09:34.502 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:09:34.502 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:09:34.502 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:09:34.502 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:09:34.502 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:09:34.502 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:09:34.502 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:09:34.502 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:09:34.502 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:09:34.502 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:09:34.502 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:09:34.502 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:09:34.502 element at address: 0x200027e6e580 with size: 0.000183 MiB 
00:09:34.502 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:09:34.502 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:09:34.502 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:09:34.502 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:09:34.502 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:09:34.502 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:09:34.502 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:09:34.502 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:09:34.502 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:09:34.502 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:09:34.502 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:09:34.502 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:09:34.502 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:09:34.502 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:09:34.502 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:09:34.502 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:09:34.502 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:09:34.502 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:09:34.502 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:09:34.502 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:09:34.502 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:09:34.502 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:09:34.502 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:09:34.502 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:09:34.502 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:09:34.502 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:09:34.502 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:09:34.502 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:09:34.502 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:09:34.502 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:09:34.502 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:09:34.502 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:09:34.502 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:09:34.502 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:09:34.502 list of memzone associated elements. 
size: 602.262573 MiB 00:09:34.502 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:09:34.502 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:09:34.502 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:09:34.502 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:09:34.502 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:09:34.502 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_61950_0 00:09:34.502 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:09:34.502 associated memzone info: size: 48.002930 MiB name: MP_evtpool_61950_0 00:09:34.502 element at address: 0x200003fff380 with size: 48.003052 MiB 00:09:34.502 associated memzone info: size: 48.002930 MiB name: MP_msgpool_61950_0 00:09:34.502 element at address: 0x2000195be940 with size: 20.255554 MiB 00:09:34.502 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:09:34.502 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:09:34.502 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:09:34.502 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:09:34.502 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_61950 00:09:34.502 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:09:34.502 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_61950 00:09:34.502 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:09:34.502 associated memzone info: size: 1.007996 MiB name: MP_evtpool_61950 00:09:34.502 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:09:34.502 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:09:34.502 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:09:34.502 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:09:34.502 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:09:34.502 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:09:34.502 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:09:34.502 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:09:34.502 element at address: 0x200003eff180 with size: 1.000488 MiB 00:09:34.502 associated memzone info: size: 1.000366 MiB name: RG_ring_0_61950 00:09:34.502 element at address: 0x200003affc00 with size: 1.000488 MiB 00:09:34.502 associated memzone info: size: 1.000366 MiB name: RG_ring_1_61950 00:09:34.502 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:09:34.502 associated memzone info: size: 1.000366 MiB name: RG_ring_4_61950 00:09:34.502 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:09:34.502 associated memzone info: size: 1.000366 MiB name: RG_ring_5_61950 00:09:34.502 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:09:34.502 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_61950 00:09:34.502 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:09:34.502 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:09:34.502 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:09:34.502 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:09:34.502 element at address: 0x20001947c540 with size: 0.250488 MiB 00:09:34.502 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:09:34.502 element at address: 0x200003adf880 with size: 0.125488 MiB 00:09:34.502 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_61950 00:09:34.502 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:09:34.502 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:09:34.502 element at address: 0x200027e66040 with size: 0.023743 MiB 00:09:34.502 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:09:34.502 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:09:34.502 associated memzone info: size: 0.015991 MiB name: RG_ring_3_61950 00:09:34.502 element at address: 0x200027e6c180 with size: 0.002441 MiB 00:09:34.502 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:09:34.502 element at address: 0x2000002d7080 with size: 0.000305 MiB 00:09:34.502 associated memzone info: size: 0.000183 MiB name: MP_msgpool_61950 00:09:34.502 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:09:34.502 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_61950 00:09:34.502 element at address: 0x200027e6cc40 with size: 0.000305 MiB 00:09:34.502 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:09:34.502 17:10:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:09:34.502 17:10:20 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 61950 00:09:34.502 17:10:20 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 61950 ']' 00:09:34.502 17:10:20 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 61950 00:09:34.502 17:10:20 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:09:34.502 17:10:20 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:34.502 17:10:20 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61950 00:09:34.502 17:10:20 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:34.502 17:10:20 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:34.502 17:10:20 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61950' 00:09:34.502 killing process with pid 61950 00:09:34.502 17:10:20 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 61950 00:09:34.502 17:10:20 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 61950 00:09:34.760 00:09:34.760 real 0m1.724s 00:09:34.760 user 0m1.878s 00:09:34.760 sys 0m0.449s 00:09:34.760 ************************************ 00:09:34.760 END TEST dpdk_mem_utility 00:09:34.760 ************************************ 00:09:34.760 17:10:20 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:34.760 17:10:20 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:09:35.018 17:10:21 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:09:35.018 17:10:21 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:35.018 17:10:21 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:35.018 17:10:21 -- common/autotest_common.sh@10 -- # set +x 00:09:35.018 ************************************ 00:09:35.018 START TEST event 00:09:35.018 ************************************ 00:09:35.018 17:10:21 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:09:35.018 * Looking for test storage... 
00:09:35.018 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:09:35.018 17:10:21 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:09:35.018 17:10:21 event -- bdev/nbd_common.sh@6 -- # set -e 00:09:35.018 17:10:21 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:09:35.018 17:10:21 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:09:35.018 17:10:21 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:35.018 17:10:21 event -- common/autotest_common.sh@10 -- # set +x 00:09:35.018 ************************************ 00:09:35.018 START TEST event_perf 00:09:35.018 ************************************ 00:09:35.018 17:10:21 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:09:35.018 Running I/O for 1 seconds...[2024-07-24 17:10:21.125347] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:09:35.018 [2024-07-24 17:10:21.125476] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62040 ] 00:09:35.275 [2024-07-24 17:10:21.270459] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:35.275 [2024-07-24 17:10:21.394956] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:35.275 [2024-07-24 17:10:21.395049] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:35.275 [2024-07-24 17:10:21.395154] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.275 Running I/O for 1 seconds...[2024-07-24 17:10:21.395154] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:36.651 00:09:36.651 lcore 0: 192128 00:09:36.651 lcore 1: 192127 00:09:36.651 lcore 2: 192128 00:09:36.651 lcore 3: 192127 00:09:36.651 done. 00:09:36.651 00:09:36.651 real 0m1.391s 00:09:36.651 user 0m4.193s 00:09:36.651 sys 0m0.074s 00:09:36.651 17:10:22 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:36.651 17:10:22 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:09:36.651 ************************************ 00:09:36.651 END TEST event_perf 00:09:36.651 ************************************ 00:09:36.651 17:10:22 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:09:36.651 17:10:22 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:36.651 17:10:22 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:36.651 17:10:22 event -- common/autotest_common.sh@10 -- # set +x 00:09:36.651 ************************************ 00:09:36.651 START TEST event_reactor 00:09:36.651 ************************************ 00:09:36.651 17:10:22 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:09:36.651 [2024-07-24 17:10:22.560880] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
00:09:36.651 [2024-07-24 17:10:22.560974] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62078 ] 00:09:36.651 [2024-07-24 17:10:22.692707] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:36.651 [2024-07-24 17:10:22.811216] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.036 test_start 00:09:38.036 oneshot 00:09:38.036 tick 100 00:09:38.036 tick 100 00:09:38.036 tick 250 00:09:38.036 tick 100 00:09:38.036 tick 100 00:09:38.036 tick 100 00:09:38.036 tick 250 00:09:38.036 tick 500 00:09:38.036 tick 100 00:09:38.036 tick 100 00:09:38.036 tick 250 00:09:38.036 tick 100 00:09:38.036 tick 100 00:09:38.036 test_end 00:09:38.036 ************************************ 00:09:38.036 END TEST event_reactor 00:09:38.036 ************************************ 00:09:38.036 00:09:38.036 real 0m1.361s 00:09:38.036 user 0m1.201s 00:09:38.036 sys 0m0.053s 00:09:38.036 17:10:23 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:38.036 17:10:23 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:09:38.036 17:10:23 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:09:38.036 17:10:23 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:38.036 17:10:23 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:38.036 17:10:23 event -- common/autotest_common.sh@10 -- # set +x 00:09:38.036 ************************************ 00:09:38.036 START TEST event_reactor_perf 00:09:38.036 ************************************ 00:09:38.036 17:10:23 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:09:38.036 [2024-07-24 17:10:23.973073] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
00:09:38.036 [2024-07-24 17:10:23.973201] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62114 ] 00:09:38.036 [2024-07-24 17:10:24.112248] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:38.036 [2024-07-24 17:10:24.231527] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.413 test_start 00:09:39.413 test_end 00:09:39.413 Performance: 370025 events per second 00:09:39.413 00:09:39.413 real 0m1.371s 00:09:39.413 user 0m1.207s 00:09:39.413 sys 0m0.058s 00:09:39.413 17:10:25 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:39.413 17:10:25 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:09:39.413 ************************************ 00:09:39.413 END TEST event_reactor_perf 00:09:39.413 ************************************ 00:09:39.413 17:10:25 event -- event/event.sh@49 -- # uname -s 00:09:39.413 17:10:25 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:09:39.413 17:10:25 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:09:39.413 17:10:25 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:39.413 17:10:25 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:39.413 17:10:25 event -- common/autotest_common.sh@10 -- # set +x 00:09:39.413 ************************************ 00:09:39.413 START TEST event_scheduler 00:09:39.413 ************************************ 00:09:39.413 17:10:25 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:09:39.413 * Looking for test storage... 00:09:39.413 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:09:39.413 17:10:25 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:09:39.413 17:10:25 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=62175 00:09:39.413 17:10:25 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:09:39.413 17:10:25 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:09:39.413 17:10:25 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 62175 00:09:39.413 17:10:25 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 62175 ']' 00:09:39.413 17:10:25 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:39.413 17:10:25 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:39.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:39.413 17:10:25 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:39.413 17:10:25 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:39.413 17:10:25 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:39.413 [2024-07-24 17:10:25.514789] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
00:09:39.413 [2024-07-24 17:10:25.515115] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62175 ] 00:09:39.413 [2024-07-24 17:10:25.645436] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:39.675 [2024-07-24 17:10:25.767246] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.675 [2024-07-24 17:10:25.767380] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:39.675 [2024-07-24 17:10:25.767470] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:39.675 [2024-07-24 17:10:25.767715] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:40.610 17:10:26 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:40.610 17:10:26 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:09:40.610 17:10:26 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:09:40.610 17:10:26 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.610 17:10:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:40.610 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:40.610 POWER: Cannot set governor of lcore 0 to userspace 00:09:40.610 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:40.610 POWER: Cannot set governor of lcore 0 to performance 00:09:40.610 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:40.610 POWER: Cannot set governor of lcore 0 to userspace 00:09:40.610 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:09:40.610 POWER: Cannot set governor of lcore 0 to userspace 00:09:40.610 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:09:40.610 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:09:40.610 POWER: Unable to set Power Management Environment for lcore 0 00:09:40.610 [2024-07-24 17:10:26.536822] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:09:40.610 [2024-07-24 17:10:26.536838] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:09:40.610 [2024-07-24 17:10:26.536846] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:09:40.610 [2024-07-24 17:10:26.536859] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:09:40.610 [2024-07-24 17:10:26.536867] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:09:40.610 [2024-07-24 17:10:26.536874] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:09:40.610 17:10:26 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.610 17:10:26 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:09:40.610 17:10:26 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.610 17:10:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:40.610 [2024-07-24 17:10:26.638820] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
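The POWER and GUEST_CHANNEL errors above are the dynamic scheduler failing to reach cpufreq sysfs or a host power agent from inside the VM; as the subsequent notices show, it proceeds without the DPDK governor and logs its load/core/busy settings (20/80/95). Stripped of the rpc_cmd wrapper, the RPC sequence the harness drives to get to "Scheduler test application started" is just the following sketch (the scheduler app was started with --wait-for-rpc, so framework init is deferred until the second call):

  SPDK_ROOT=/home/vagrant/spdk_repo/spdk
  SOCK=/var/tmp/spdk.sock
  "$SPDK_ROOT/scripts/rpc.py" -s "$SOCK" framework_set_scheduler dynamic
  "$SPDK_ROOT/scripts/rpc.py" -s "$SOCK" framework_start_init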
00:09:40.610 17:10:26 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.610 17:10:26 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:09:40.610 17:10:26 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:40.610 17:10:26 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:40.610 17:10:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:40.610 ************************************ 00:09:40.610 START TEST scheduler_create_thread 00:09:40.610 ************************************ 00:09:40.610 17:10:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:09:40.610 17:10:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:09:40.610 17:10:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.610 17:10:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:40.610 2 00:09:40.610 17:10:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.610 17:10:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:09:40.611 17:10:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.611 17:10:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:40.611 3 00:09:40.611 17:10:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.611 17:10:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:09:40.611 17:10:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.611 17:10:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:40.611 4 00:09:40.611 17:10:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.611 17:10:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:09:40.611 17:10:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.611 17:10:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:40.611 5 00:09:40.611 17:10:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.611 17:10:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:09:40.611 17:10:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.611 17:10:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:40.611 6 00:09:40.611 17:10:26 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.611 17:10:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:09:40.611 17:10:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.611 17:10:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:40.611 7 00:09:40.611 17:10:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.611 17:10:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:09:40.611 17:10:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.611 17:10:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:40.611 8 00:09:40.611 17:10:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.611 17:10:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:09:40.611 17:10:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.611 17:10:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:40.611 9 00:09:40.611 17:10:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.611 17:10:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:09:40.611 17:10:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.611 17:10:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:40.611 10 00:09:40.611 17:10:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.611 17:10:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:09:40.611 17:10:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.611 17:10:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:40.611 17:10:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.611 17:10:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:09:40.611 17:10:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:09:40.611 17:10:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.611 17:10:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:40.611 17:10:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.611 17:10:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:09:40.611 17:10:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.611 17:10:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:41.987 17:10:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.987 17:10:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:09:41.987 17:10:28 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:09:41.987 17:10:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.987 17:10:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:43.362 ************************************ 00:09:43.362 END TEST scheduler_create_thread 00:09:43.362 ************************************ 00:09:43.362 17:10:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.362 00:09:43.362 real 0m2.616s 00:09:43.362 user 0m0.017s 00:09:43.362 sys 0m0.008s 00:09:43.362 17:10:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:43.362 17:10:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:43.362 17:10:29 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:09:43.362 17:10:29 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 62175 00:09:43.362 17:10:29 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 62175 ']' 00:09:43.362 17:10:29 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 62175 00:09:43.362 17:10:29 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:09:43.362 17:10:29 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:43.362 17:10:29 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62175 00:09:43.362 killing process with pid 62175 00:09:43.362 17:10:29 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:09:43.362 17:10:29 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:09:43.362 17:10:29 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62175' 00:09:43.362 17:10:29 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 62175 00:09:43.362 17:10:29 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 62175 00:09:43.621 [2024-07-24 17:10:29.747510] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
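For reference, the scheduler_create_thread subtest that just completed is driven entirely through the scheduler_plugin extension of rpc.py; the thread ids 11 and 12 are simply the values this particular run returned. A condensed, illustrative sketch of the same sequence (rpc_cmd is the harness wrapper around scripts/rpc.py, and scheduler_plugin is the test's own RPC plugin):

  rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100   # repeated per core mask 0x1..0x8
  rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0       # likewise for the idle threads
  rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
  thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
  rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active "$thread_id" 50              # thread 11 in this run
  tmp_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
  rpc_cmd --plugin scheduler_plugin scheduler_thread_delete "$tmp_id"                        # thread 12 in this run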
00:09:43.881 00:09:43.881 real 0m4.612s 00:09:43.881 user 0m8.816s 00:09:43.881 sys 0m0.346s 00:09:43.881 17:10:29 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:43.881 ************************************ 00:09:43.881 END TEST event_scheduler 00:09:43.881 ************************************ 00:09:43.881 17:10:29 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:43.881 17:10:30 event -- event/event.sh@51 -- # modprobe -n nbd 00:09:43.881 17:10:30 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:09:43.881 17:10:30 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:43.881 17:10:30 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:43.881 17:10:30 event -- common/autotest_common.sh@10 -- # set +x 00:09:43.881 ************************************ 00:09:43.881 START TEST app_repeat 00:09:43.881 ************************************ 00:09:43.881 17:10:30 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:09:43.881 17:10:30 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:43.881 17:10:30 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:43.881 17:10:30 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:09:43.881 17:10:30 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:43.881 17:10:30 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:09:43.881 17:10:30 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:09:43.881 17:10:30 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:09:43.881 Process app_repeat pid: 62293 00:09:43.881 spdk_app_start Round 0 00:09:43.881 17:10:30 event.app_repeat -- event/event.sh@19 -- # repeat_pid=62293 00:09:43.881 17:10:30 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:09:43.881 17:10:30 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 62293' 00:09:43.881 17:10:30 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:43.881 17:10:30 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:09:43.881 17:10:30 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:09:43.881 17:10:30 event.app_repeat -- event/event.sh@25 -- # waitforlisten 62293 /var/tmp/spdk-nbd.sock 00:09:43.881 17:10:30 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 62293 ']' 00:09:43.881 17:10:30 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:43.881 17:10:30 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:43.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:43.881 17:10:30 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:43.881 17:10:30 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:43.881 17:10:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:43.881 [2024-07-24 17:10:30.086215] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
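Round 0 of app_repeat is driven from test/event/event.sh: the app_repeat binary is launched against a dedicated RPC socket, a cleanup trap is installed, and the harness blocks until that socket is listening. Roughly, with the argument values recorded in the trace (waitforlisten and killprocess are assumed to be the usual autotest_common.sh helpers):

  /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat \
      -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 &        # flags exactly as traced above
  repeat_pid=$!
  trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
  waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock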
00:09:43.881 [2024-07-24 17:10:30.086332] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62293 ] 00:09:44.140 [2024-07-24 17:10:30.225209] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:44.140 [2024-07-24 17:10:30.350688] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:44.140 [2024-07-24 17:10:30.350700] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:45.075 17:10:31 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:45.075 17:10:31 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:09:45.075 17:10:31 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:45.333 Malloc0 00:09:45.333 17:10:31 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:45.591 Malloc1 00:09:45.591 17:10:31 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:45.591 17:10:31 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:45.591 17:10:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:45.591 17:10:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:45.591 17:10:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:45.591 17:10:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:45.591 17:10:31 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:45.591 17:10:31 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:45.592 17:10:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:45.592 17:10:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:45.592 17:10:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:45.592 17:10:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:45.592 17:10:31 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:45.592 17:10:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:45.592 17:10:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:45.592 17:10:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:45.849 /dev/nbd0 00:09:45.849 17:10:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:45.849 17:10:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:45.849 17:10:32 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:09:45.849 17:10:32 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:09:45.849 17:10:32 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:45.849 17:10:32 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:45.849 17:10:32 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:09:45.849 17:10:32 event.app_repeat -- 
common/autotest_common.sh@873 -- # break 00:09:45.850 17:10:32 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:45.850 17:10:32 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:45.850 17:10:32 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:45.850 1+0 records in 00:09:45.850 1+0 records out 00:09:45.850 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000605052 s, 6.8 MB/s 00:09:45.850 17:10:32 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:45.850 17:10:32 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:09:45.850 17:10:32 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:45.850 17:10:32 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:45.850 17:10:32 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:09:45.850 17:10:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:45.850 17:10:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:45.850 17:10:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:46.416 /dev/nbd1 00:09:46.416 17:10:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:46.416 17:10:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:46.416 17:10:32 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:09:46.416 17:10:32 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:09:46.416 17:10:32 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:46.416 17:10:32 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:46.416 17:10:32 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:09:46.416 17:10:32 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:09:46.416 17:10:32 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:46.416 17:10:32 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:46.416 17:10:32 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:46.416 1+0 records in 00:09:46.416 1+0 records out 00:09:46.416 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000409808 s, 10.0 MB/s 00:09:46.416 17:10:32 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:46.416 17:10:32 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:09:46.416 17:10:32 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:46.416 17:10:32 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:46.416 17:10:32 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:09:46.416 17:10:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:46.416 17:10:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:46.416 17:10:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:46.416 17:10:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:46.417 
17:10:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:46.675 17:10:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:46.675 { 00:09:46.675 "bdev_name": "Malloc0", 00:09:46.676 "nbd_device": "/dev/nbd0" 00:09:46.676 }, 00:09:46.676 { 00:09:46.676 "bdev_name": "Malloc1", 00:09:46.676 "nbd_device": "/dev/nbd1" 00:09:46.676 } 00:09:46.676 ]' 00:09:46.676 17:10:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:46.676 { 00:09:46.676 "bdev_name": "Malloc0", 00:09:46.676 "nbd_device": "/dev/nbd0" 00:09:46.676 }, 00:09:46.676 { 00:09:46.676 "bdev_name": "Malloc1", 00:09:46.676 "nbd_device": "/dev/nbd1" 00:09:46.676 } 00:09:46.676 ]' 00:09:46.676 17:10:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:46.676 17:10:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:46.676 /dev/nbd1' 00:09:46.676 17:10:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:46.676 /dev/nbd1' 00:09:46.676 17:10:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:46.676 17:10:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:46.676 17:10:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:46.676 17:10:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:46.676 17:10:32 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:46.676 17:10:32 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:46.676 17:10:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:46.676 17:10:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:46.676 17:10:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:46.676 17:10:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:46.676 17:10:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:46.676 17:10:32 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:46.676 256+0 records in 00:09:46.676 256+0 records out 00:09:46.676 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00811818 s, 129 MB/s 00:09:46.676 17:10:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:46.676 17:10:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:46.676 256+0 records in 00:09:46.676 256+0 records out 00:09:46.676 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0251655 s, 41.7 MB/s 00:09:46.676 17:10:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:46.676 17:10:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:46.676 256+0 records in 00:09:46.676 256+0 records out 00:09:46.676 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.02871 s, 36.5 MB/s 00:09:46.676 17:10:32 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:46.676 17:10:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:46.676 17:10:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:46.676 17:10:32 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:46.676 17:10:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:46.676 17:10:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:46.676 17:10:32 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:46.676 17:10:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:46.676 17:10:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:46.676 17:10:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:46.676 17:10:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:09:46.676 17:10:32 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:46.676 17:10:32 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:46.676 17:10:32 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:46.676 17:10:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:46.676 17:10:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:46.676 17:10:32 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:46.676 17:10:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:46.676 17:10:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:46.934 17:10:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:46.934 17:10:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:46.934 17:10:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:46.934 17:10:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:46.934 17:10:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:46.934 17:10:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:46.934 17:10:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:46.934 17:10:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:46.934 17:10:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:46.934 17:10:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:47.192 17:10:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:47.192 17:10:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:47.192 17:10:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:47.192 17:10:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:47.192 17:10:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:47.192 17:10:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:47.192 17:10:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:47.192 17:10:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:47.192 17:10:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:47.192 17:10:33 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:47.192 17:10:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:47.449 17:10:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:47.449 17:10:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:47.449 17:10:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:47.707 17:10:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:47.707 17:10:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:47.707 17:10:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:47.707 17:10:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:47.707 17:10:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:47.707 17:10:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:47.707 17:10:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:47.707 17:10:33 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:47.707 17:10:33 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:47.707 17:10:33 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:47.995 17:10:33 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:47.995 [2024-07-24 17:10:34.181363] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:48.253 [2024-07-24 17:10:34.298843] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:48.253 [2024-07-24 17:10:34.298869] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:48.253 [2024-07-24 17:10:34.355273] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:48.253 [2024-07-24 17:10:34.355340] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:50.778 17:10:36 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:50.779 spdk_app_start Round 1 00:09:50.779 17:10:36 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:09:50.779 17:10:36 event.app_repeat -- event/event.sh@25 -- # waitforlisten 62293 /var/tmp/spdk-nbd.sock 00:09:50.779 17:10:36 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 62293 ']' 00:09:50.779 17:10:36 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:50.779 17:10:36 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:50.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:50.779 17:10:36 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
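Every round then performs the same nbd round-trip that the nbd_common.sh trace above shows for Round 0: two malloc bdevs are created over the app's RPC socket, exported as /dev/nbd0 and /dev/nbd1 (a one-block dd read first confirms each device is actually up), written from a random test file, compared back, and detached. A condensed sketch, with paths and sizes as they appear in the trace; this is the shape of nbd_rpc_data_verify, not its verbatim body:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  $rpc bdev_malloc_create 64 4096                        # -> Malloc0 (repeated for Malloc1)
  $rpc nbd_start_disk Malloc0 /dev/nbd0
  dd if=/dev/urandom of=nbdrandtest bs=4096 count=256    # 1 MiB of random data
  dd if=nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
  cmp -b -n 1M nbdrandtest /dev/nbd0                     # verify the readback byte for byte
  $rpc nbd_stop_disk /dev/nbd0
  rm nbdrandtest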
00:09:50.779 17:10:36 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:50.779 17:10:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:51.036 17:10:37 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:51.036 17:10:37 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:09:51.037 17:10:37 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:51.294 Malloc0 00:09:51.294 17:10:37 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:51.551 Malloc1 00:09:51.552 17:10:37 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:51.552 17:10:37 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:51.552 17:10:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:51.552 17:10:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:51.552 17:10:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:51.552 17:10:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:51.552 17:10:37 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:51.552 17:10:37 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:51.552 17:10:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:51.552 17:10:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:51.552 17:10:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:51.552 17:10:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:51.552 17:10:37 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:51.552 17:10:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:51.552 17:10:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:51.552 17:10:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:51.809 /dev/nbd0 00:09:51.809 17:10:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:51.809 17:10:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:51.809 17:10:38 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:09:51.809 17:10:38 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:09:51.809 17:10:38 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:51.809 17:10:38 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:51.809 17:10:38 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:09:51.809 17:10:38 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:09:51.809 17:10:38 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:51.809 17:10:38 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:51.809 17:10:38 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:51.809 1+0 records in 00:09:51.809 1+0 records out 
00:09:51.809 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000329561 s, 12.4 MB/s 00:09:51.809 17:10:38 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:51.809 17:10:38 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:09:51.809 17:10:38 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:51.809 17:10:38 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:51.809 17:10:38 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:09:51.809 17:10:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:51.809 17:10:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:51.809 17:10:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:52.067 /dev/nbd1 00:09:52.067 17:10:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:52.067 17:10:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:52.067 17:10:38 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:09:52.067 17:10:38 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:09:52.067 17:10:38 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:52.067 17:10:38 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:52.067 17:10:38 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:09:52.325 17:10:38 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:09:52.325 17:10:38 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:52.325 17:10:38 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:52.325 17:10:38 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:52.325 1+0 records in 00:09:52.325 1+0 records out 00:09:52.325 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000310069 s, 13.2 MB/s 00:09:52.325 17:10:38 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:52.325 17:10:38 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:09:52.325 17:10:38 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:52.325 17:10:38 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:52.325 17:10:38 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:09:52.325 17:10:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:52.325 17:10:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:52.325 17:10:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:52.325 17:10:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:52.325 17:10:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:52.583 17:10:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:52.583 { 00:09:52.583 "bdev_name": "Malloc0", 00:09:52.583 "nbd_device": "/dev/nbd0" 00:09:52.583 }, 00:09:52.583 { 00:09:52.583 "bdev_name": "Malloc1", 00:09:52.583 "nbd_device": "/dev/nbd1" 00:09:52.583 } 
00:09:52.583 ]' 00:09:52.583 17:10:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:52.583 { 00:09:52.583 "bdev_name": "Malloc0", 00:09:52.583 "nbd_device": "/dev/nbd0" 00:09:52.583 }, 00:09:52.583 { 00:09:52.583 "bdev_name": "Malloc1", 00:09:52.583 "nbd_device": "/dev/nbd1" 00:09:52.583 } 00:09:52.583 ]' 00:09:52.583 17:10:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:52.583 17:10:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:52.583 /dev/nbd1' 00:09:52.583 17:10:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:52.583 /dev/nbd1' 00:09:52.583 17:10:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:52.583 17:10:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:52.583 17:10:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:52.583 17:10:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:52.583 17:10:38 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:52.583 17:10:38 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:52.583 17:10:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:52.583 17:10:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:52.583 17:10:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:52.583 17:10:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:52.583 17:10:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:52.583 17:10:38 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:52.583 256+0 records in 00:09:52.583 256+0 records out 00:09:52.583 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00905326 s, 116 MB/s 00:09:52.583 17:10:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:52.583 17:10:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:52.583 256+0 records in 00:09:52.583 256+0 records out 00:09:52.584 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0249186 s, 42.1 MB/s 00:09:52.584 17:10:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:52.584 17:10:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:52.584 256+0 records in 00:09:52.584 256+0 records out 00:09:52.584 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0267004 s, 39.3 MB/s 00:09:52.584 17:10:38 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:52.584 17:10:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:52.584 17:10:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:52.584 17:10:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:52.584 17:10:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:52.584 17:10:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:52.584 17:10:38 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:52.584 17:10:38 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:52.584 17:10:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:52.584 17:10:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:52.584 17:10:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:09:52.584 17:10:38 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:52.584 17:10:38 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:52.584 17:10:38 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:52.584 17:10:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:52.584 17:10:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:52.584 17:10:38 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:52.584 17:10:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:52.584 17:10:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:52.842 17:10:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:52.842 17:10:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:52.842 17:10:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:52.842 17:10:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:52.842 17:10:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:52.842 17:10:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:52.842 17:10:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:52.842 17:10:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:52.842 17:10:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:52.842 17:10:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:53.101 17:10:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:53.101 17:10:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:53.101 17:10:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:53.101 17:10:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:53.101 17:10:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:53.101 17:10:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:53.101 17:10:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:53.101 17:10:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:53.101 17:10:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:53.101 17:10:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:53.101 17:10:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:53.667 17:10:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:53.667 17:10:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:53.667 17:10:39 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:09:53.667 17:10:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:53.667 17:10:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:53.667 17:10:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:53.667 17:10:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:53.667 17:10:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:53.667 17:10:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:53.667 17:10:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:53.667 17:10:39 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:53.667 17:10:39 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:53.667 17:10:39 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:53.925 17:10:39 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:54.184 [2024-07-24 17:10:40.171789] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:54.184 [2024-07-24 17:10:40.286654] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:54.184 [2024-07-24 17:10:40.286663] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:54.184 [2024-07-24 17:10:40.342795] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:54.184 [2024-07-24 17:10:40.342859] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:57.470 17:10:42 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:57.470 spdk_app_start Round 2 00:09:57.470 17:10:42 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:09:57.470 17:10:42 event.app_repeat -- event/event.sh@25 -- # waitforlisten 62293 /var/tmp/spdk-nbd.sock 00:09:57.470 17:10:42 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 62293 ']' 00:09:57.470 17:10:42 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:57.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:57.470 17:10:42 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:57.470 17:10:42 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
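In between, the harness also sanity-checks how many nbd devices the target reports by parsing nbd_get_disks; after the nbd_stop_disk calls the same query has to come back empty. Approximately, following the jq/grep usage in the trace (the expected count of 2 matches the two devices used here):

  json=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks)
  names=$(echo "$json" | jq -r '.[] | .nbd_device')   # /dev/nbd0 and /dev/nbd1 while attached
  count=$(echo "$names" | grep -c /dev/nbd || true)   # grep -c exits non-zero on no match, hence || true
  [[ $count -eq 2 ]]                                  # and 0 again once both disks are stopped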
00:09:57.470 17:10:42 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:57.470 17:10:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:57.470 17:10:43 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:57.470 17:10:43 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:09:57.470 17:10:43 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:57.470 Malloc0 00:09:57.470 17:10:43 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:57.728 Malloc1 00:09:57.728 17:10:43 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:57.728 17:10:43 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:57.728 17:10:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:57.728 17:10:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:57.728 17:10:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:57.728 17:10:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:57.728 17:10:43 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:57.728 17:10:43 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:57.728 17:10:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:57.728 17:10:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:57.728 17:10:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:57.728 17:10:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:57.728 17:10:43 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:57.728 17:10:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:57.728 17:10:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:57.728 17:10:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:57.986 /dev/nbd0 00:09:57.986 17:10:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:57.986 17:10:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:57.986 17:10:44 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:09:57.986 17:10:44 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:09:57.986 17:10:44 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:57.986 17:10:44 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:57.986 17:10:44 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:09:57.986 17:10:44 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:09:57.986 17:10:44 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:57.986 17:10:44 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:57.986 17:10:44 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:57.986 1+0 records in 00:09:57.986 1+0 records out 
00:09:57.986 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000409733 s, 10.0 MB/s 00:09:57.986 17:10:44 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:57.986 17:10:44 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:09:57.986 17:10:44 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:57.986 17:10:44 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:57.986 17:10:44 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:09:57.986 17:10:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:57.986 17:10:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:57.986 17:10:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:58.251 /dev/nbd1 00:09:58.251 17:10:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:58.252 17:10:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:58.252 17:10:44 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:09:58.252 17:10:44 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:09:58.252 17:10:44 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:09:58.252 17:10:44 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:09:58.252 17:10:44 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:09:58.252 17:10:44 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:09:58.252 17:10:44 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:09:58.252 17:10:44 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:09:58.252 17:10:44 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:58.252 1+0 records in 00:09:58.252 1+0 records out 00:09:58.252 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000355834 s, 11.5 MB/s 00:09:58.252 17:10:44 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:58.252 17:10:44 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:09:58.252 17:10:44 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:09:58.252 17:10:44 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:09:58.252 17:10:44 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:09:58.252 17:10:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:58.252 17:10:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:58.252 17:10:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:58.252 17:10:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:58.252 17:10:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:58.527 17:10:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:58.527 { 00:09:58.527 "bdev_name": "Malloc0", 00:09:58.527 "nbd_device": "/dev/nbd0" 00:09:58.527 }, 00:09:58.527 { 00:09:58.527 "bdev_name": "Malloc1", 00:09:58.527 "nbd_device": "/dev/nbd1" 00:09:58.527 } 
00:09:58.527 ]' 00:09:58.527 17:10:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:58.527 { 00:09:58.527 "bdev_name": "Malloc0", 00:09:58.527 "nbd_device": "/dev/nbd0" 00:09:58.527 }, 00:09:58.527 { 00:09:58.527 "bdev_name": "Malloc1", 00:09:58.527 "nbd_device": "/dev/nbd1" 00:09:58.527 } 00:09:58.527 ]' 00:09:58.527 17:10:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:58.785 17:10:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:58.785 /dev/nbd1' 00:09:58.785 17:10:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:58.785 /dev/nbd1' 00:09:58.785 17:10:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:58.785 17:10:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:58.785 17:10:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:58.785 17:10:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:58.785 17:10:44 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:58.785 17:10:44 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:58.785 17:10:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:58.785 17:10:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:58.785 17:10:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:58.785 17:10:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:58.785 17:10:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:58.785 17:10:44 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:58.785 256+0 records in 00:09:58.785 256+0 records out 00:09:58.785 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00939914 s, 112 MB/s 00:09:58.785 17:10:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:58.785 17:10:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:58.785 256+0 records in 00:09:58.785 256+0 records out 00:09:58.785 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0267039 s, 39.3 MB/s 00:09:58.785 17:10:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:58.785 17:10:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:58.785 256+0 records in 00:09:58.785 256+0 records out 00:09:58.785 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.028039 s, 37.4 MB/s 00:09:58.785 17:10:44 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:58.785 17:10:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:58.785 17:10:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:58.785 17:10:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:58.786 17:10:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:58.786 17:10:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:58.786 17:10:44 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:58.786 17:10:44 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:09:58.786 17:10:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:09:58.786 17:10:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:58.786 17:10:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:09:58.786 17:10:44 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:09:58.786 17:10:44 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:58.786 17:10:44 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:58.786 17:10:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:58.786 17:10:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:58.786 17:10:44 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:58.786 17:10:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:58.786 17:10:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:59.044 17:10:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:59.044 17:10:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:59.044 17:10:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:59.044 17:10:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:59.044 17:10:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:59.044 17:10:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:59.044 17:10:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:59.044 17:10:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:59.044 17:10:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:59.044 17:10:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:59.303 17:10:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:59.303 17:10:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:59.303 17:10:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:59.303 17:10:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:59.303 17:10:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:59.303 17:10:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:59.303 17:10:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:59.303 17:10:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:59.303 17:10:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:59.303 17:10:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:59.303 17:10:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:59.561 17:10:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:59.561 17:10:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:59.561 17:10:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:09:59.561 17:10:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:59.561 17:10:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:59.561 17:10:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:59.561 17:10:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:59.561 17:10:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:59.561 17:10:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:59.561 17:10:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:59.561 17:10:45 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:59.561 17:10:45 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:59.561 17:10:45 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:59.821 17:10:46 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:10:00.079 [2024-07-24 17:10:46.227062] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:00.337 [2024-07-24 17:10:46.350641] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:00.337 [2024-07-24 17:10:46.350647] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.337 [2024-07-24 17:10:46.407148] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:10:00.337 [2024-07-24 17:10:46.407226] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:10:02.867 17:10:49 event.app_repeat -- event/event.sh@38 -- # waitforlisten 62293 /var/tmp/spdk-nbd.sock 00:10:02.867 17:10:49 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 62293 ']' 00:10:02.867 17:10:49 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:02.868 17:10:49 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:02.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:02.868 17:10:49 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
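At the end of each round the application is asked to exit over RPC rather than being signalled from outside, the script sleeps while app_repeat brings the next round up, and only after the last round is the process killed and reaped. In outline (RPC name, sleep and helpers taken from the event.sh trace; the loop body is the verification sketched earlier):

  for i in {0..2}; do
      echo "spdk_app_start Round $i"
      waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock
      # ... malloc/nbd data verification for this round ...
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
          spdk_kill_instance SIGTERM                   # current round shuts down, next one starts
      sleep 3
  done
  waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock   # Round 3 came back up
  killprocess "$repeat_pid"                            # final teardown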
00:10:02.868 17:10:49 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:02.868 17:10:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:03.127 17:10:49 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:03.127 17:10:49 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:10:03.127 17:10:49 event.app_repeat -- event/event.sh@39 -- # killprocess 62293 00:10:03.127 17:10:49 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 62293 ']' 00:10:03.127 17:10:49 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 62293 00:10:03.127 17:10:49 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:10:03.127 17:10:49 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:03.127 17:10:49 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62293 00:10:03.127 17:10:49 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:03.127 17:10:49 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:03.127 killing process with pid 62293 00:10:03.127 17:10:49 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62293' 00:10:03.127 17:10:49 event.app_repeat -- common/autotest_common.sh@969 -- # kill 62293 00:10:03.127 17:10:49 event.app_repeat -- common/autotest_common.sh@974 -- # wait 62293 00:10:03.386 spdk_app_start is called in Round 0. 00:10:03.386 Shutdown signal received, stop current app iteration 00:10:03.386 Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 reinitialization... 00:10:03.386 spdk_app_start is called in Round 1. 00:10:03.386 Shutdown signal received, stop current app iteration 00:10:03.386 Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 reinitialization... 00:10:03.386 spdk_app_start is called in Round 2. 00:10:03.386 Shutdown signal received, stop current app iteration 00:10:03.386 Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 reinitialization... 00:10:03.386 spdk_app_start is called in Round 3. 00:10:03.386 Shutdown signal received, stop current app iteration 00:10:03.386 17:10:49 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:10:03.386 17:10:49 event.app_repeat -- event/event.sh@42 -- # return 0 00:10:03.386 00:10:03.386 real 0m19.491s 00:10:03.386 user 0m43.608s 00:10:03.386 sys 0m3.156s 00:10:03.386 17:10:49 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:03.386 17:10:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:10:03.386 ************************************ 00:10:03.386 END TEST app_repeat 00:10:03.386 ************************************ 00:10:03.386 17:10:49 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:10:03.386 17:10:49 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:10:03.386 17:10:49 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:03.386 17:10:49 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:03.386 17:10:49 event -- common/autotest_common.sh@10 -- # set +x 00:10:03.386 ************************************ 00:10:03.386 START TEST cpu_locks 00:10:03.386 ************************************ 00:10:03.386 17:10:49 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:10:03.644 * Looking for test storage... 
00:10:03.644 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:10:03.644 17:10:49 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:10:03.644 17:10:49 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:10:03.644 17:10:49 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:10:03.644 17:10:49 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:10:03.644 17:10:49 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:03.644 17:10:49 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:03.644 17:10:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:03.644 ************************************ 00:10:03.644 START TEST default_locks 00:10:03.644 ************************************ 00:10:03.644 17:10:49 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:10:03.644 17:10:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=62923 00:10:03.644 17:10:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 62923 00:10:03.644 17:10:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:03.644 17:10:49 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 62923 ']' 00:10:03.644 17:10:49 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:03.644 17:10:49 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:03.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:03.644 17:10:49 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:03.644 17:10:49 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:03.644 17:10:49 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:10:03.644 [2024-07-24 17:10:49.749065] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
00:10:03.644 [2024-07-24 17:10:49.749200] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62923 ] 00:10:03.902 [2024-07-24 17:10:49.887425] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:03.902 [2024-07-24 17:10:50.021270] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:04.837 17:10:50 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:04.837 17:10:50 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:10:04.837 17:10:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 62923 00:10:04.837 17:10:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 62923 00:10:04.837 17:10:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:05.096 17:10:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 62923 00:10:05.096 17:10:51 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 62923 ']' 00:10:05.096 17:10:51 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 62923 00:10:05.096 17:10:51 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:10:05.096 17:10:51 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:05.096 17:10:51 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62923 00:10:05.096 killing process with pid 62923 00:10:05.096 17:10:51 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:05.096 17:10:51 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:05.096 17:10:51 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62923' 00:10:05.096 17:10:51 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 62923 00:10:05.096 17:10:51 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 62923 00:10:05.354 17:10:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 62923 00:10:05.354 17:10:51 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:10:05.354 17:10:51 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 62923 00:10:05.354 17:10:51 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:10:05.354 17:10:51 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:05.355 17:10:51 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:10:05.355 17:10:51 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:05.355 17:10:51 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 62923 00:10:05.355 17:10:51 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 62923 ']' 00:10:05.355 17:10:51 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:05.355 17:10:51 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:05.355 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:05.355 17:10:51 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:05.355 17:10:51 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:05.355 17:10:51 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:10:05.355 ERROR: process (pid: 62923) is no longer running 00:10:05.355 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (62923) - No such process 00:10:05.355 17:10:51 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:05.355 17:10:51 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:10:05.355 17:10:51 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:10:05.355 17:10:51 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:05.355 17:10:51 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:05.355 17:10:51 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:05.355 17:10:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:10:05.355 17:10:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:10:05.355 17:10:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:10:05.355 17:10:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:10:05.355 00:10:05.355 real 0m1.852s 00:10:05.355 user 0m1.964s 00:10:05.355 sys 0m0.553s 00:10:05.355 17:10:51 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:05.355 ************************************ 00:10:05.355 END TEST default_locks 00:10:05.355 ************************************ 00:10:05.355 17:10:51 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:10:05.355 17:10:51 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:10:05.613 17:10:51 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:05.613 17:10:51 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:05.613 17:10:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:05.613 ************************************ 00:10:05.613 START TEST default_locks_via_rpc 00:10:05.613 ************************************ 00:10:05.613 17:10:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:10:05.613 17:10:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=62987 00:10:05.614 17:10:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:05.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
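The default_locks run that just finished boils down to three checks: the freshly started spdk_tgt holds a CPU-core lock (visible to lslocks as spdk_cpu_lock), the lock goes away when the process is killed, and waiting on the dead pid again returns an error. A rough stand-alone sketch of the same checks; the pid and the waitforlisten helper are specific to this run, so substitute your own process:

    pid=62923                                   # the spdk_tgt started with -m 0x1 above
    lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "core lock held by $pid"
    kill "$pid"; wait "$pid" 2>/dev/null        # reaping with wait only works for a child of this shell
    kill -0 "$pid" 2>/dev/null || echo "process gone: waiting on it again is expected to fail"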
00:10:05.614 17:10:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 62987 00:10:05.614 17:10:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 62987 ']' 00:10:05.614 17:10:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:05.614 17:10:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:05.614 17:10:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:05.614 17:10:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:05.614 17:10:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:05.614 [2024-07-24 17:10:51.667838] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:10:05.614 [2024-07-24 17:10:51.668178] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62987 ] 00:10:05.614 [2024-07-24 17:10:51.802964] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:05.873 [2024-07-24 17:10:51.951994] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:06.439 17:10:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:06.439 17:10:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:10:06.439 17:10:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:10:06.439 17:10:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.439 17:10:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:06.697 17:10:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.697 17:10:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:10:06.697 17:10:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:10:06.697 17:10:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:10:06.697 17:10:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:10:06.697 17:10:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:10:06.697 17:10:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.697 17:10:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:06.697 17:10:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.697 17:10:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 62987 00:10:06.697 17:10:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:06.697 17:10:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 62987 00:10:06.956 17:10:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 62987 00:10:06.956 17:10:53 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 62987 ']' 00:10:06.956 17:10:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 62987 00:10:06.956 17:10:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:10:06.956 17:10:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:06.956 17:10:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62987 00:10:06.956 killing process with pid 62987 00:10:06.956 17:10:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:06.956 17:10:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:06.956 17:10:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62987' 00:10:06.956 17:10:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 62987 00:10:06.956 17:10:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 62987 00:10:07.524 00:10:07.524 real 0m1.979s 00:10:07.524 user 0m2.137s 00:10:07.524 sys 0m0.599s 00:10:07.524 17:10:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:07.524 ************************************ 00:10:07.524 END TEST default_locks_via_rpc 00:10:07.524 ************************************ 00:10:07.524 17:10:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:07.524 17:10:53 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:10:07.524 17:10:53 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:07.524 17:10:53 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:07.524 17:10:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:07.524 ************************************ 00:10:07.524 START TEST non_locking_app_on_locked_coremask 00:10:07.524 ************************************ 00:10:07.524 17:10:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:10:07.524 17:10:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=63056 00:10:07.524 17:10:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 63056 /var/tmp/spdk.sock 00:10:07.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:07.524 17:10:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 63056 ']' 00:10:07.524 17:10:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:07.524 17:10:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:07.524 17:10:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:07.524 17:10:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
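default_locks_via_rpc, which ends here, toggles the same lock over JSON-RPC instead of by killing the process: framework_disable_cpumask_locks releases the /var/tmp/spdk_cpu_lock_* files and framework_enable_cpumask_locks re-claims them. A hedged sketch against a target already listening on the default /var/tmp/spdk.sock:

    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK"/scripts/rpc.py framework_disable_cpumask_locks      # lock file for core 0 is released
    ls /var/tmp/spdk_cpu_lock_* 2>/dev/null || echo "no core locks held"
    "$SPDK"/scripts/rpc.py framework_enable_cpumask_locks       # lock re-appears
    lslocks | grep spdk_cpu_lock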
00:10:07.524 17:10:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:07.524 17:10:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:07.524 [2024-07-24 17:10:53.698087] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:10:07.524 [2024-07-24 17:10:53.698525] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63056 ] 00:10:07.782 [2024-07-24 17:10:53.837538] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:07.782 [2024-07-24 17:10:53.963518] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:08.716 17:10:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:08.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:08.716 17:10:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:10:08.716 17:10:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=63084 00:10:08.716 17:10:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:10:08.716 17:10:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 63084 /var/tmp/spdk2.sock 00:10:08.716 17:10:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 63084 ']' 00:10:08.716 17:10:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:08.716 17:10:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:08.716 17:10:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:08.716 17:10:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:08.716 17:10:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:08.716 [2024-07-24 17:10:54.789203] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:10:08.716 [2024-07-24 17:10:54.789500] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63084 ] 00:10:08.716 [2024-07-24 17:10:54.931042] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
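The pairing shown above is the core of non_locking_app_on_locked_coremask: the first spdk_tgt claims core 0, and a second one on the same core only starts because --disable-cpumask-locks skips lock claiming (hence the "CPU core locks deactivated" notice) while -r points it at a second RPC socket. A sketch of that launch pair, with paths as in this CI run:

    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK"/build/bin/spdk_tgt -m 0x1 &                                   # holds /var/tmp/spdk_cpu_lock_000
    "$SPDK"/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks \
        -r /var/tmp/spdk2.sock &                                          # shares core 0, no lock taken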
00:10:08.716 [2024-07-24 17:10:54.931111] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:08.974 [2024-07-24 17:10:55.187292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:09.908 17:10:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:09.908 17:10:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:10:09.908 17:10:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 63056 00:10:09.908 17:10:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 63056 00:10:09.908 17:10:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:10.475 17:10:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 63056 00:10:10.475 17:10:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 63056 ']' 00:10:10.475 17:10:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 63056 00:10:10.475 17:10:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:10:10.475 17:10:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:10.475 17:10:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63056 00:10:10.475 killing process with pid 63056 00:10:10.475 17:10:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:10.475 17:10:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:10.475 17:10:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63056' 00:10:10.475 17:10:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 63056 00:10:10.475 17:10:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 63056 00:10:11.411 17:10:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 63084 00:10:11.411 17:10:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 63084 ']' 00:10:11.411 17:10:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 63084 00:10:11.411 17:10:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:10:11.411 17:10:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:11.411 17:10:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63084 00:10:11.411 killing process with pid 63084 00:10:11.411 17:10:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:11.411 17:10:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:11.411 17:10:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63084' 00:10:11.411 17:10:57 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 63084 00:10:11.411 17:10:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 63084 00:10:11.979 ************************************ 00:10:11.979 END TEST non_locking_app_on_locked_coremask 00:10:11.979 ************************************ 00:10:11.979 00:10:11.979 real 0m4.323s 00:10:11.979 user 0m4.797s 00:10:11.979 sys 0m1.217s 00:10:11.979 17:10:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:11.979 17:10:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:11.979 17:10:57 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:10:11.979 17:10:57 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:11.979 17:10:57 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:11.979 17:10:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:11.979 ************************************ 00:10:11.980 START TEST locking_app_on_unlocked_coremask 00:10:11.980 ************************************ 00:10:11.980 17:10:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:10:11.980 17:10:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=63163 00:10:11.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:11.980 17:10:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 63163 /var/tmp/spdk.sock 00:10:11.980 17:10:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:10:11.980 17:10:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 63163 ']' 00:10:11.980 17:10:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:11.980 17:10:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:11.980 17:10:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:11.980 17:10:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:11.980 17:10:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:11.980 [2024-07-24 17:10:58.067321] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:10:11.980 [2024-07-24 17:10:58.067428] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63163 ] 00:10:11.980 [2024-07-24 17:10:58.203161] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:10:11.980 [2024-07-24 17:10:58.203240] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:12.238 [2024-07-24 17:10:58.344850] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:13.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:13.190 17:10:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:13.190 17:10:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:10:13.190 17:10:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=63191 00:10:13.190 17:10:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 63191 /var/tmp/spdk2.sock 00:10:13.190 17:10:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:10:13.190 17:10:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 63191 ']' 00:10:13.190 17:10:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:13.190 17:10:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:13.190 17:10:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:13.190 17:10:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:13.190 17:10:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:13.190 [2024-07-24 17:10:59.215135] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
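locking_app_on_unlocked_coremask, running here, inverts that pairing: the first target gives up lock claiming, so the lock on core 0 ends up owned by the second, normally started target (the lslocks check against its pid follows below). A rough sketch of the idea; the one-second sleep stands in for the waitforlisten polling the real test uses:

    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK"/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &           # no lock taken
    "$SPDK"/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock & second=$!
    sleep 1
    lslocks -p "$second" | grep spdk_cpu_lock                             # lock belongs to the second pid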
00:10:13.190 [2024-07-24 17:10:59.216005] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63191 ] 00:10:13.190 [2024-07-24 17:10:59.359622] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:13.449 [2024-07-24 17:10:59.619275] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:14.017 17:11:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:14.017 17:11:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:10:14.017 17:11:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 63191 00:10:14.017 17:11:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 63191 00:10:14.017 17:11:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:14.954 17:11:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 63163 00:10:14.954 17:11:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 63163 ']' 00:10:14.954 17:11:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 63163 00:10:14.954 17:11:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:10:14.954 17:11:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:14.954 17:11:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63163 00:10:14.954 killing process with pid 63163 00:10:14.954 17:11:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:14.954 17:11:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:14.954 17:11:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63163' 00:10:14.954 17:11:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 63163 00:10:14.954 17:11:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 63163 00:10:15.890 17:11:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 63191 00:10:15.890 17:11:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 63191 ']' 00:10:15.890 17:11:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 63191 00:10:15.890 17:11:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:10:15.890 17:11:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:15.890 17:11:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63191 00:10:15.890 17:11:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:15.890 killing process with pid 63191 00:10:15.890 17:11:01 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:15.890 17:11:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63191' 00:10:15.890 17:11:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 63191 00:10:15.890 17:11:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 63191 00:10:16.150 00:10:16.150 real 0m4.209s 00:10:16.150 user 0m4.702s 00:10:16.150 sys 0m1.155s 00:10:16.150 ************************************ 00:10:16.150 END TEST locking_app_on_unlocked_coremask 00:10:16.150 ************************************ 00:10:16.150 17:11:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:16.150 17:11:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:16.150 17:11:02 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:10:16.150 17:11:02 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:16.150 17:11:02 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:16.150 17:11:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:16.150 ************************************ 00:10:16.150 START TEST locking_app_on_locked_coremask 00:10:16.150 ************************************ 00:10:16.150 17:11:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:10:16.151 17:11:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=63270 00:10:16.151 17:11:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 63270 /var/tmp/spdk.sock 00:10:16.151 17:11:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:10:16.151 17:11:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 63270 ']' 00:10:16.151 17:11:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:16.151 17:11:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:16.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:16.151 17:11:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:16.151 17:11:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:16.151 17:11:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:16.151 [2024-07-24 17:11:02.354524] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
00:10:16.151 [2024-07-24 17:11:02.354635] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63270 ] 00:10:16.412 [2024-07-24 17:11:02.492928] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:16.412 [2024-07-24 17:11:02.614093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:17.345 17:11:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:17.345 17:11:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:10:17.345 17:11:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=63298 00:10:17.345 17:11:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 63298 /var/tmp/spdk2.sock 00:10:17.346 17:11:03 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:10:17.346 17:11:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:10:17.346 17:11:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 63298 /var/tmp/spdk2.sock 00:10:17.346 17:11:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:10:17.346 17:11:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:17.346 17:11:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:10:17.346 17:11:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:17.346 17:11:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 63298 /var/tmp/spdk2.sock 00:10:17.346 17:11:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 63298 ']' 00:10:17.346 17:11:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:17.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:17.346 17:11:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:17.346 17:11:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:17.346 17:11:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:17.346 17:11:03 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:17.346 [2024-07-24 17:11:03.393579] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
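In locking_app_on_locked_coremask the second target launched just above is expected to die: core 0 is already locked by pid 63270, so spdk_app_start refuses to come up (the claim_cpu_cores error and the "No such process" from the wait follow below). A minimal reproduction of that collision, assuming the same binary paths:

    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK"/build/bin/spdk_tgt -m 0x1 & sleep 1                           # first instance claims core 0
    "$SPDK"/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock              # "Cannot create lock on core 0 ..."
    echo "second instance exited with $?"                                 # non-zero, as the test asserts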
00:10:17.346 [2024-07-24 17:11:03.393691] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63298 ] 00:10:17.346 [2024-07-24 17:11:03.537581] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 63270 has claimed it. 00:10:17.346 [2024-07-24 17:11:03.537668] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:10:17.911 ERROR: process (pid: 63298) is no longer running 00:10:17.911 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (63298) - No such process 00:10:17.911 17:11:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:17.911 17:11:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:10:17.911 17:11:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:10:17.911 17:11:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:17.911 17:11:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:17.911 17:11:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:17.911 17:11:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 63270 00:10:17.911 17:11:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 63270 00:10:17.911 17:11:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:10:18.479 17:11:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 63270 00:10:18.479 17:11:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 63270 ']' 00:10:18.479 17:11:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 63270 00:10:18.479 17:11:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:10:18.479 17:11:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:18.479 17:11:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63270 00:10:18.479 killing process with pid 63270 00:10:18.479 17:11:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:18.479 17:11:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:18.479 17:11:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63270' 00:10:18.479 17:11:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 63270 00:10:18.479 17:11:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 63270 00:10:19.047 00:10:19.047 real 0m2.732s 00:10:19.047 user 0m3.121s 00:10:19.047 sys 0m0.732s 00:10:19.047 17:11:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:19.047 ************************************ 00:10:19.047 END 
TEST locking_app_on_locked_coremask 00:10:19.047 ************************************ 00:10:19.047 17:11:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:19.047 17:11:05 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:10:19.047 17:11:05 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:19.047 17:11:05 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:19.047 17:11:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:19.047 ************************************ 00:10:19.047 START TEST locking_overlapped_coremask 00:10:19.047 ************************************ 00:10:19.047 17:11:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:10:19.047 17:11:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=63355 00:10:19.047 17:11:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 63355 /var/tmp/spdk.sock 00:10:19.047 17:11:05 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:10:19.047 17:11:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 63355 ']' 00:10:19.047 17:11:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:19.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:19.047 17:11:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:19.047 17:11:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:19.047 17:11:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:19.047 17:11:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:19.047 [2024-07-24 17:11:05.126226] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
00:10:19.047 [2024-07-24 17:11:05.126351] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63355 ] 00:10:19.047 [2024-07-24 17:11:05.267918] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:19.306 [2024-07-24 17:11:05.400378] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:19.306 [2024-07-24 17:11:05.400551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:19.306 [2024-07-24 17:11:05.400561] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:20.240 17:11:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:20.240 17:11:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:10:20.240 17:11:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=63385 00:10:20.240 17:11:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:10:20.240 17:11:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 63385 /var/tmp/spdk2.sock 00:10:20.240 17:11:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:10:20.240 17:11:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 63385 /var/tmp/spdk2.sock 00:10:20.240 17:11:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:10:20.240 17:11:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:20.240 17:11:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:10:20.240 17:11:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:20.240 17:11:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 63385 /var/tmp/spdk2.sock 00:10:20.240 17:11:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 63385 ']' 00:10:20.240 17:11:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:20.240 17:11:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:20.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:20.240 17:11:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:20.240 17:11:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:20.240 17:11:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:20.240 [2024-07-24 17:11:06.191957] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
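locking_overlapped_coremask does the same thing with partially overlapping masks: 0x7 covers cores 0-2 and 0x1c covers cores 2-4, so the second target collides on core 2 (error below) and check_remaining_locks then verifies that only /var/tmp/spdk_cpu_lock_000 through _002, held by the surviving 0x7 target, are left. A sketch under the same path assumptions:

    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK"/build/bin/spdk_tgt -m 0x7 & sleep 1                           # locks cores 0, 1, 2
    "$SPDK"/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock             # fails: core 2 already claimed
    ls /var/tmp/spdk_cpu_lock_*                                           # expect _000 _001 _002 only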
00:10:20.240 [2024-07-24 17:11:06.192068] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63385 ] 00:10:20.240 [2024-07-24 17:11:06.335929] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 63355 has claimed it. 00:10:20.240 [2024-07-24 17:11:06.336016] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:10:20.805 ERROR: process (pid: 63385) is no longer running 00:10:20.805 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (63385) - No such process 00:10:20.805 17:11:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:20.805 17:11:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:10:20.805 17:11:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:10:20.805 17:11:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:20.805 17:11:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:20.805 17:11:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:20.805 17:11:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:10:20.805 17:11:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:10:20.805 17:11:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:10:20.805 17:11:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:10:20.805 17:11:06 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 63355 00:10:20.805 17:11:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 63355 ']' 00:10:20.805 17:11:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 63355 00:10:20.805 17:11:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:10:20.805 17:11:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:20.805 17:11:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63355 00:10:20.805 17:11:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:20.805 killing process with pid 63355 00:10:20.805 17:11:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:20.805 17:11:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63355' 00:10:20.805 17:11:06 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 63355 00:10:20.805 17:11:06 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 63355 00:10:21.370 00:10:21.370 real 0m2.250s 00:10:21.370 user 0m6.148s 00:10:21.370 sys 0m0.459s 00:10:21.370 17:11:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:21.370 17:11:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:10:21.370 ************************************ 00:10:21.370 END TEST locking_overlapped_coremask 00:10:21.370 ************************************ 00:10:21.370 17:11:07 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:10:21.370 17:11:07 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:21.370 17:11:07 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:21.370 17:11:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:21.370 ************************************ 00:10:21.370 START TEST locking_overlapped_coremask_via_rpc 00:10:21.370 ************************************ 00:10:21.370 17:11:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:10:21.370 17:11:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=63431 00:10:21.370 17:11:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:10:21.370 17:11:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 63431 /var/tmp/spdk.sock 00:10:21.370 17:11:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 63431 ']' 00:10:21.370 17:11:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:21.370 17:11:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:21.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:21.370 17:11:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:21.370 17:11:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:21.370 17:11:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:21.370 [2024-07-24 17:11:07.417594] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:10:21.370 [2024-07-24 17:11:07.417714] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63431 ] 00:10:21.370 [2024-07-24 17:11:07.553076] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:10:21.370 [2024-07-24 17:11:07.553156] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:21.628 [2024-07-24 17:11:07.678217] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:21.628 [2024-07-24 17:11:07.678302] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:21.628 [2024-07-24 17:11:07.678308] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:22.257 17:11:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:22.257 17:11:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:10:22.257 17:11:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:10:22.257 17:11:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=63461 00:10:22.257 17:11:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 63461 /var/tmp/spdk2.sock 00:10:22.257 17:11:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 63461 ']' 00:10:22.257 17:11:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:22.257 17:11:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:22.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:22.257 17:11:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:22.257 17:11:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:22.257 17:11:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:22.515 [2024-07-24 17:11:08.494948] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:10:22.515 [2024-07-24 17:11:08.495580] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63461 ] 00:10:22.515 [2024-07-24 17:11:08.641763] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:10:22.515 [2024-07-24 17:11:08.641831] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:22.771 [2024-07-24 17:11:08.890119] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:22.771 [2024-07-24 17:11:08.893155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:10:22.771 [2024-07-24 17:11:08.893158] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:23.336 17:11:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:23.336 17:11:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:10:23.336 17:11:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:10:23.336 17:11:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.336 17:11:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:23.336 17:11:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.336 17:11:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:23.336 17:11:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:10:23.336 17:11:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:23.336 17:11:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:10:23.594 17:11:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:23.594 17:11:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:10:23.594 17:11:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:23.594 17:11:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:23.594 17:11:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.594 17:11:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:23.594 [2024-07-24 17:11:09.577294] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 63431 has claimed it. 
00:10:23.594 2024/07/24 17:11:09 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 00:10:23.594 request: 00:10:23.594 { 00:10:23.594 "method": "framework_enable_cpumask_locks", 00:10:23.594 "params": {} 00:10:23.594 } 00:10:23.594 Got JSON-RPC error response 00:10:23.594 GoRPCClient: error on JSON-RPC call 00:10:23.594 17:11:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:23.594 17:11:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:10:23.594 17:11:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:23.594 17:11:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:23.594 17:11:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:23.594 17:11:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 63431 /var/tmp/spdk.sock 00:10:23.594 17:11:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 63431 ']' 00:10:23.594 17:11:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:23.594 17:11:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:23.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:23.594 17:11:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:23.594 17:11:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:23.594 17:11:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:23.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:23.851 17:11:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:23.851 17:11:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:10:23.851 17:11:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 63461 /var/tmp/spdk2.sock 00:10:23.851 17:11:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 63461 ']' 00:10:23.851 17:11:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:23.851 17:11:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:23.851 17:11:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
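A sketch of the negative check exercised above: with the first target still holding the lock for core 2, framework_enable_cpumask_locks on the second target must fail with the -32603 "Failed to claim CPU core: 2" error shown in the log. Socket paths are the ones used above; treating a successful call as a test failure is this sketch's own framing.
if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks; then
    echo "unexpected: second target claimed an already-locked core" >&2
    exit 1
fi
echo "expected failure: core 2 is still locked by the first target"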
00:10:23.851 17:11:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:23.851 17:11:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:24.109 17:11:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:24.109 17:11:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:10:24.109 17:11:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:10:24.109 17:11:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:10:24.109 17:11:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:10:24.109 17:11:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:10:24.109 00:10:24.109 real 0m2.865s 00:10:24.109 user 0m1.561s 00:10:24.109 sys 0m0.237s 00:10:24.109 17:11:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:24.109 17:11:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:24.109 ************************************ 00:10:24.109 END TEST locking_overlapped_coremask_via_rpc 00:10:24.109 ************************************ 00:10:24.109 17:11:10 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:10:24.109 17:11:10 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 63431 ]] 00:10:24.109 17:11:10 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 63431 00:10:24.109 17:11:10 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 63431 ']' 00:10:24.109 17:11:10 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 63431 00:10:24.109 17:11:10 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:10:24.109 17:11:10 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:24.109 17:11:10 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63431 00:10:24.109 17:11:10 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:24.109 17:11:10 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:24.109 killing process with pid 63431 00:10:24.109 17:11:10 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63431' 00:10:24.109 17:11:10 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 63431 00:10:24.109 17:11:10 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 63431 00:10:24.675 17:11:10 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 63461 ]] 00:10:24.675 17:11:10 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 63461 00:10:24.675 17:11:10 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 63461 ']' 00:10:24.675 17:11:10 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 63461 00:10:24.675 17:11:10 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:10:24.675 17:11:10 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:24.675 
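The check_remaining_locks step above verifies that exactly the first target's lock files remain. A condensed sketch of that comparison, using the same paths as the log:
locks=(/var/tmp/spdk_cpu_lock_*)
locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})   # cores 0-2 of the first target
[[ "${locks[*]}" == "${locks_expected[*]}" ]] || { echo "unexpected lock files: ${locks[*]}" >&2; exit 1; }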
17:11:10 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63461 00:10:24.675 17:11:10 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:10:24.675 17:11:10 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:10:24.675 killing process with pid 63461 00:10:24.675 17:11:10 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63461' 00:10:24.675 17:11:10 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 63461 00:10:24.675 17:11:10 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 63461 00:10:24.933 17:11:11 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:10:24.933 17:11:11 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:10:24.933 17:11:11 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 63431 ]] 00:10:24.933 17:11:11 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 63431 00:10:24.933 17:11:11 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 63431 ']' 00:10:24.933 17:11:11 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 63431 00:10:24.933 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (63431) - No such process 00:10:24.933 Process with pid 63431 is not found 00:10:24.933 17:11:11 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 63431 is not found' 00:10:24.933 17:11:11 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 63461 ]] 00:10:24.933 17:11:11 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 63461 00:10:24.933 17:11:11 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 63461 ']' 00:10:24.933 17:11:11 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 63461 00:10:24.933 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (63461) - No such process 00:10:24.933 Process with pid 63461 is not found 00:10:24.933 17:11:11 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 63461 is not found' 00:10:24.933 17:11:11 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:10:24.933 00:10:24.933 real 0m21.547s 00:10:24.933 user 0m37.898s 00:10:24.933 sys 0m5.811s 00:10:24.933 17:11:11 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:24.933 ************************************ 00:10:24.933 END TEST cpu_locks 00:10:24.933 ************************************ 00:10:24.933 17:11:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:25.233 00:10:25.233 real 0m50.170s 00:10:25.233 user 1m37.060s 00:10:25.233 sys 0m9.730s 00:10:25.233 17:11:11 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:25.233 17:11:11 event -- common/autotest_common.sh@10 -- # set +x 00:10:25.233 ************************************ 00:10:25.233 END TEST event 00:10:25.233 ************************************ 00:10:25.233 17:11:11 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:10:25.233 17:11:11 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:25.233 17:11:11 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:25.233 17:11:11 -- common/autotest_common.sh@10 -- # set +x 00:10:25.233 ************************************ 00:10:25.233 START TEST thread 00:10:25.233 ************************************ 00:10:25.233 17:11:11 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:10:25.233 * Looking for test storage... 
00:10:25.233 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:10:25.233 17:11:11 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:10:25.233 17:11:11 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:10:25.233 17:11:11 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:25.233 17:11:11 thread -- common/autotest_common.sh@10 -- # set +x 00:10:25.233 ************************************ 00:10:25.233 START TEST thread_poller_perf 00:10:25.233 ************************************ 00:10:25.233 17:11:11 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:10:25.233 [2024-07-24 17:11:11.337048] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:10:25.233 [2024-07-24 17:11:11.337166] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63613 ] 00:10:25.492 [2024-07-24 17:11:11.475617] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:25.492 [2024-07-24 17:11:11.594236] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:25.492 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:10:26.867 ====================================== 00:10:26.867 busy:2206358771 (cyc) 00:10:26.867 total_run_count: 322000 00:10:26.867 tsc_hz: 2200000000 (cyc) 00:10:26.867 ====================================== 00:10:26.867 poller_cost: 6852 (cyc), 3114 (nsec) 00:10:26.867 00:10:26.867 real 0m1.372s 00:10:26.867 user 0m1.213s 00:10:26.867 sys 0m0.052s 00:10:26.867 17:11:12 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:26.867 17:11:12 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:10:26.867 ************************************ 00:10:26.867 END TEST thread_poller_perf 00:10:26.867 ************************************ 00:10:26.867 17:11:12 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:10:26.867 17:11:12 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:10:26.867 17:11:12 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:26.867 17:11:12 thread -- common/autotest_common.sh@10 -- # set +x 00:10:26.867 ************************************ 00:10:26.867 START TEST thread_poller_perf 00:10:26.867 ************************************ 00:10:26.867 17:11:12 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:10:26.867 [2024-07-24 17:11:12.761919] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:10:26.868 [2024-07-24 17:11:12.762027] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63643 ] 00:10:26.868 [2024-07-24 17:11:12.895261] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:26.868 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:10:26.868 [2024-07-24 17:11:13.016888] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:28.242 ====================================== 00:10:28.242 busy:2202189100 (cyc) 00:10:28.242 total_run_count: 4030000 00:10:28.242 tsc_hz: 2200000000 (cyc) 00:10:28.242 ====================================== 00:10:28.242 poller_cost: 546 (cyc), 248 (nsec) 00:10:28.242 00:10:28.242 real 0m1.364s 00:10:28.242 user 0m1.198s 00:10:28.242 sys 0m0.058s 00:10:28.242 17:11:14 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:28.242 17:11:14 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:10:28.242 ************************************ 00:10:28.242 END TEST thread_poller_perf 00:10:28.242 ************************************ 00:10:28.242 17:11:14 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:10:28.242 00:10:28.242 real 0m2.921s 00:10:28.242 user 0m2.480s 00:10:28.242 sys 0m0.218s 00:10:28.242 17:11:14 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:28.242 17:11:14 thread -- common/autotest_common.sh@10 -- # set +x 00:10:28.242 ************************************ 00:10:28.242 END TEST thread 00:10:28.242 ************************************ 00:10:28.242 17:11:14 -- spdk/autotest.sh@184 -- # [[ 0 -eq 1 ]] 00:10:28.242 17:11:14 -- spdk/autotest.sh@189 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:10:28.242 17:11:14 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:28.242 17:11:14 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:28.242 17:11:14 -- common/autotest_common.sh@10 -- # set +x 00:10:28.242 ************************************ 00:10:28.242 START TEST app_cmdline 00:10:28.242 ************************************ 00:10:28.242 17:11:14 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:10:28.242 * Looking for test storage... 00:10:28.242 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:10:28.242 17:11:14 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:10:28.242 17:11:14 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=63718 00:10:28.242 17:11:14 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 63718 00:10:28.242 17:11:14 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:10:28.242 17:11:14 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 63718 ']' 00:10:28.242 17:11:14 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:28.242 17:11:14 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:28.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:28.242 17:11:14 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:28.242 17:11:14 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:28.242 17:11:14 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:28.242 [2024-07-24 17:11:14.341046] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
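The poller_cost figures reported by the two poller_perf runs above are consistent with busy TSC cycles divided by the total poller run count, converted to nanoseconds via tsc_hz. Checking against the printed numbers:
# poller_cost(cyc)  ~= busy / total_run_count ; poller_cost(nsec) ~= cyc * 1e9 / tsc_hz
# 1 us period run: 2206358771 / 322000  ~= 6852 cyc ; 6852 / 2.2 GHz ~= 3114 nsec
# 0 us period run: 2202189100 / 4030000 ~=  546 cyc ;  546 / 2.2 GHz ~=  248 nsec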
00:10:28.242 [2024-07-24 17:11:14.341565] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63718 ] 00:10:28.501 [2024-07-24 17:11:14.478669] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:28.501 [2024-07-24 17:11:14.612255] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.436 17:11:15 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:29.436 17:11:15 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:10:29.436 17:11:15 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:10:29.436 { 00:10:29.436 "fields": { 00:10:29.436 "commit": "dca21ec0f", 00:10:29.436 "major": 24, 00:10:29.436 "minor": 9, 00:10:29.436 "patch": 0, 00:10:29.436 "suffix": "-pre" 00:10:29.436 }, 00:10:29.436 "version": "SPDK v24.09-pre git sha1 dca21ec0f" 00:10:29.436 } 00:10:29.436 17:11:15 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:10:29.436 17:11:15 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:10:29.436 17:11:15 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:10:29.436 17:11:15 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:10:29.436 17:11:15 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:10:29.436 17:11:15 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:10:29.436 17:11:15 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.436 17:11:15 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:29.436 17:11:15 app_cmdline -- app/cmdline.sh@26 -- # sort 00:10:29.436 17:11:15 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.436 17:11:15 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:10:29.436 17:11:15 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:10:29.436 17:11:15 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:29.436 17:11:15 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:10:29.436 17:11:15 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:29.437 17:11:15 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:29.437 17:11:15 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:29.437 17:11:15 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:29.437 17:11:15 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:29.437 17:11:15 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:29.437 17:11:15 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:29.437 17:11:15 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:29.437 17:11:15 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:29.437 17:11:15 app_cmdline -- common/autotest_common.sh@653 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:29.695 2024/07/24 17:11:15 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found 00:10:29.695 request: 00:10:29.695 { 00:10:29.695 "method": "env_dpdk_get_mem_stats", 00:10:29.695 "params": {} 00:10:29.695 } 00:10:29.695 Got JSON-RPC error response 00:10:29.695 GoRPCClient: error on JSON-RPC call 00:10:29.952 17:11:15 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:10:29.953 17:11:15 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:29.953 17:11:15 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:29.953 17:11:15 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:29.953 17:11:15 app_cmdline -- app/cmdline.sh@1 -- # killprocess 63718 00:10:29.953 17:11:15 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 63718 ']' 00:10:29.953 17:11:15 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 63718 00:10:29.953 17:11:15 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:10:29.953 17:11:15 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:29.953 17:11:15 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63718 00:10:29.953 killing process with pid 63718 00:10:29.953 17:11:15 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:29.953 17:11:15 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:29.953 17:11:15 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63718' 00:10:29.953 17:11:15 app_cmdline -- common/autotest_common.sh@969 -- # kill 63718 00:10:29.953 17:11:15 app_cmdline -- common/autotest_common.sh@974 -- # wait 63718 00:10:30.211 00:10:30.211 real 0m2.174s 00:10:30.211 user 0m2.727s 00:10:30.211 sys 0m0.496s 00:10:30.211 17:11:16 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:30.211 ************************************ 00:10:30.211 END TEST app_cmdline 00:10:30.211 ************************************ 00:10:30.211 17:11:16 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:30.211 17:11:16 -- spdk/autotest.sh@190 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:10:30.211 17:11:16 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:30.211 17:11:16 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:30.211 17:11:16 -- common/autotest_common.sh@10 -- # set +x 00:10:30.211 ************************************ 00:10:30.211 START TEST version 00:10:30.211 ************************************ 00:10:30.211 17:11:16 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:10:30.470 * Looking for test storage... 
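The app_cmdline test above starts spdk_tgt with an RPC allow-list, so only spdk_get_version and rpc_get_methods are callable and anything else (env_dpdk_get_mem_stats here) is rejected with JSON-RPC error -32601. A minimal sketch of that behavior, reusing the paths and flags from the log:
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
# ... wait for /var/tmp/spdk.sock ...
/home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version           # allowed, returns the version JSON
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats     # rejected: Code=-32601 Method not found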
00:10:30.470 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:10:30.470 17:11:16 version -- app/version.sh@17 -- # get_header_version major 00:10:30.470 17:11:16 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:30.470 17:11:16 version -- app/version.sh@14 -- # cut -f2 00:10:30.470 17:11:16 version -- app/version.sh@14 -- # tr -d '"' 00:10:30.470 17:11:16 version -- app/version.sh@17 -- # major=24 00:10:30.470 17:11:16 version -- app/version.sh@18 -- # get_header_version minor 00:10:30.470 17:11:16 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:30.470 17:11:16 version -- app/version.sh@14 -- # cut -f2 00:10:30.470 17:11:16 version -- app/version.sh@14 -- # tr -d '"' 00:10:30.470 17:11:16 version -- app/version.sh@18 -- # minor=9 00:10:30.470 17:11:16 version -- app/version.sh@19 -- # get_header_version patch 00:10:30.470 17:11:16 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:30.470 17:11:16 version -- app/version.sh@14 -- # cut -f2 00:10:30.470 17:11:16 version -- app/version.sh@14 -- # tr -d '"' 00:10:30.470 17:11:16 version -- app/version.sh@19 -- # patch=0 00:10:30.470 17:11:16 version -- app/version.sh@20 -- # get_header_version suffix 00:10:30.470 17:11:16 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:10:30.470 17:11:16 version -- app/version.sh@14 -- # cut -f2 00:10:30.470 17:11:16 version -- app/version.sh@14 -- # tr -d '"' 00:10:30.470 17:11:16 version -- app/version.sh@20 -- # suffix=-pre 00:10:30.470 17:11:16 version -- app/version.sh@22 -- # version=24.9 00:10:30.470 17:11:16 version -- app/version.sh@25 -- # (( patch != 0 )) 00:10:30.470 17:11:16 version -- app/version.sh@28 -- # version=24.9rc0 00:10:30.470 17:11:16 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:10:30.470 17:11:16 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:10:30.470 17:11:16 version -- app/version.sh@30 -- # py_version=24.9rc0 00:10:30.470 17:11:16 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:10:30.470 00:10:30.470 real 0m0.151s 00:10:30.470 user 0m0.088s 00:10:30.470 sys 0m0.093s 00:10:30.470 17:11:16 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:30.470 ************************************ 00:10:30.470 END TEST version 00:10:30.470 ************************************ 00:10:30.470 17:11:16 version -- common/autotest_common.sh@10 -- # set +x 00:10:30.470 17:11:16 -- spdk/autotest.sh@192 -- # '[' 0 -eq 1 ']' 00:10:30.470 17:11:16 -- spdk/autotest.sh@202 -- # uname -s 00:10:30.470 17:11:16 -- spdk/autotest.sh@202 -- # [[ Linux == Linux ]] 00:10:30.470 17:11:16 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:10:30.470 17:11:16 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:10:30.470 17:11:16 -- spdk/autotest.sh@215 -- # '[' 0 -eq 1 ']' 00:10:30.470 17:11:16 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:10:30.470 17:11:16 -- spdk/autotest.sh@264 -- # timing_exit lib 00:10:30.470 17:11:16 -- common/autotest_common.sh@730 -- # xtrace_disable 
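The version test above derives 24.9rc0 by parsing include/spdk/version.h with grep/cut/tr and comparing it to what the Python package reports. A rough sketch of that parse, not the script's exact code (the helper name and the tab-delimited cut follow the logged commands):
get_header_version() {
    grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" \
        /home/vagrant/spdk_repo/spdk/include/spdk/version.h | cut -f2 | tr -d '"'
}
version="$(get_header_version MAJOR).$(get_header_version MINOR)"   # 24.9 in this run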
00:10:30.470 17:11:16 -- common/autotest_common.sh@10 -- # set +x 00:10:30.470 17:11:16 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:10:30.470 17:11:16 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:10:30.470 17:11:16 -- spdk/autotest.sh@283 -- # '[' 1 -eq 1 ']' 00:10:30.470 17:11:16 -- spdk/autotest.sh@284 -- # export NET_TYPE 00:10:30.470 17:11:16 -- spdk/autotest.sh@287 -- # '[' tcp = rdma ']' 00:10:30.470 17:11:16 -- spdk/autotest.sh@290 -- # '[' tcp = tcp ']' 00:10:30.470 17:11:16 -- spdk/autotest.sh@291 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:10:30.470 17:11:16 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:30.470 17:11:16 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:30.470 17:11:16 -- common/autotest_common.sh@10 -- # set +x 00:10:30.470 ************************************ 00:10:30.470 START TEST nvmf_tcp 00:10:30.470 ************************************ 00:10:30.470 17:11:16 nvmf_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:10:30.728 * Looking for test storage... 00:10:30.728 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:10:30.728 17:11:16 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:10:30.728 17:11:16 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:10:30.728 17:11:16 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:10:30.728 17:11:16 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:30.728 17:11:16 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:30.728 17:11:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:30.728 ************************************ 00:10:30.728 START TEST nvmf_target_core 00:10:30.728 ************************************ 00:10:30.728 17:11:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:10:30.728 * Looking for test storage... 00:10:30.728 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:10:30.728 17:11:16 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:10:30.728 17:11:16 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:10:30.728 17:11:16 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:30.728 17:11:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:10:30.728 17:11:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:30.728 17:11:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:30.728 17:11:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:30.728 17:11:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:30.728 17:11:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:30.728 17:11:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:30.728 17:11:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:30.728 17:11:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:30.728 17:11:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:30.728 17:11:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:30.728 17:11:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:10:30.728 17:11:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:10:30.728 17:11:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:30.728 17:11:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:30.728 17:11:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:30.728 17:11:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:30.728 17:11:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:30.728 17:11:16 nvmf_tcp.nvmf_target_core -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:30.728 17:11:16 nvmf_tcp.nvmf_target_core -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:30.728 17:11:16 nvmf_tcp.nvmf_target_core -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:30.728 17:11:16 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.728 17:11:16 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.728 17:11:16 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.728 17:11:16 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:10:30.728 17:11:16 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.728 17:11:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@47 -- # : 0 00:10:30.728 17:11:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:30.728 17:11:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:30.728 17:11:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:30.728 17:11:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:30.728 17:11:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:30.728 17:11:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:30.728 17:11:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:30.728 17:11:16 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:30.728 17:11:16 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:30.728 17:11:16 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:10:30.728 17:11:16 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:10:30.728 17:11:16 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:10:30.728 17:11:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:30.729 17:11:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:30.729 17:11:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:30.729 ************************************ 00:10:30.729 START TEST nvmf_abort 00:10:30.729 ************************************ 00:10:30.729 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:10:30.729 * Looking for test storage... 
00:10:30.729 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:30.729 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:30.729 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:10:30.987 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:30.987 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:30.987 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:30.987 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:30.987 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:30.987 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:30.987 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:30.987 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:30.987 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:30.987 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:30.987 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:10:30.987 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:10:30.987 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:30.987 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:30.987 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:30.987 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:30.987 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:30.987 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:30.987 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:30.987 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:30.987 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.987 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.988 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.988 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:10:30.988 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.988 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:10:30.988 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:30.988 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:30.988 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:30.988 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:30.988 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:30.988 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:30.988 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:30.988 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:30.988 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:30.988 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:10:30.988 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:10:30.988 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:30.988 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
00:10:30.988 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:30.988 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:30.988 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:30.988 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:30.988 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:30.988 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:30.988 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:30.988 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:30.988 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:30.988 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:30.988 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:30.988 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:30.988 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:30.988 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:30.988 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:30.988 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:30.988 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:30.988 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:30.988 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:30.988 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:30.988 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:30.988 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:30.988 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:30.988 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:30.988 17:11:16 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:30.988 Cannot find device "nvmf_init_br" 00:10:30.988 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@154 -- # true 00:10:30.988 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:30.988 Cannot find device "nvmf_tgt_br" 00:10:30.988 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@155 -- # true 00:10:30.988 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:30.988 Cannot find device "nvmf_tgt_br2" 00:10:30.988 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@156 -- # true 00:10:30.988 17:11:17 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:30.988 Cannot find device "nvmf_init_br" 00:10:30.988 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@157 -- # true 00:10:30.988 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:30.988 Cannot find device "nvmf_tgt_br" 00:10:30.988 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@158 -- # true 00:10:30.988 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:30.988 Cannot find device "nvmf_tgt_br2" 00:10:30.988 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@159 -- # true 00:10:30.988 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:30.988 Cannot find device "nvmf_br" 00:10:30.988 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@160 -- # true 00:10:30.988 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:30.988 Cannot find device "nvmf_init_if" 00:10:30.988 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@161 -- # true 00:10:30.988 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:30.988 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:30.988 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@162 -- # true 00:10:30.988 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:30.988 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:30.988 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@163 -- # true 00:10:30.988 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:30.988 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:30.988 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:30.988 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:30.988 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:30.988 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:30.988 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:30.988 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:30.988 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:30.988 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:30.988 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:30.988 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:30.988 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:30.988 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:30.988 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:30.988 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:30.988 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:31.247 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:31.247 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:31.247 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:31.247 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:31.247 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:31.247 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:31.247 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:31.247 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:31.247 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:10:31.247 00:10:31.247 --- 10.0.0.2 ping statistics --- 00:10:31.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:31.247 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:10:31.247 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:31.247 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:31.247 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:10:31.247 00:10:31.247 --- 10.0.0.3 ping statistics --- 00:10:31.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:31.247 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:10:31.247 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:31.247 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:31.247 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:10:31.247 00:10:31.247 --- 10.0.0.1 ping statistics --- 00:10:31.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:31.247 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:10:31.247 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:31.247 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@433 -- # return 0 00:10:31.247 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:31.247 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:31.247 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:31.247 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:31.247 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:31.247 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:31.247 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:31.247 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:10:31.247 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:31.247 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:31.247 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:31.247 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:31.247 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=64093 00:10:31.247 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 64093 00:10:31.247 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 64093 ']' 00:10:31.247 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:31.247 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:31.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:31.247 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:31.247 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:31.247 17:11:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:31.247 [2024-07-24 17:11:17.424471] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
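Before the abort test, nvmf_veth_init above builds a small veth-and-bridge topology with the target side inside the nvmf_tgt_ns_spdk namespace, then ping-checks the addresses. A condensed sketch of that setup with the names and addresses from the log (the second target interface, 10.0.0.3, and cleanup steps are omitted):
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2   # initiator side reaching the in-namespace target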
00:10:31.247 [2024-07-24 17:11:17.424586] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:31.506 [2024-07-24 17:11:17.563977] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:31.506 [2024-07-24 17:11:17.693346] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:31.506 [2024-07-24 17:11:17.693414] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:31.506 [2024-07-24 17:11:17.693425] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:31.506 [2024-07-24 17:11:17.693434] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:31.506 [2024-07-24 17:11:17.693441] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:31.506 [2024-07-24 17:11:17.693505] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:31.506 [2024-07-24 17:11:17.694041] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:31.506 [2024-07-24 17:11:17.694050] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:32.440 17:11:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:32.440 17:11:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:10:32.440 17:11:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:32.440 17:11:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:32.440 17:11:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:32.440 17:11:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:32.440 17:11:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:10:32.440 17:11:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.440 17:11:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:32.441 [2024-07-24 17:11:18.497829] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:32.441 17:11:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.441 17:11:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:10:32.441 17:11:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.441 17:11:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:32.441 Malloc0 00:10:32.441 17:11:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.441 17:11:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:32.441 17:11:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.441 17:11:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:32.441 
Delay0 00:10:32.441 17:11:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.441 17:11:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:32.441 17:11:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.441 17:11:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:32.441 17:11:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.441 17:11:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:10:32.441 17:11:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.441 17:11:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:32.441 17:11:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.441 17:11:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:32.441 17:11:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.441 17:11:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:32.441 [2024-07-24 17:11:18.568737] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:32.441 17:11:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.441 17:11:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:32.441 17:11:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.441 17:11:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:32.441 17:11:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.441 17:11:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:10:32.700 [2024-07-24 17:11:18.769323] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:34.602 Initializing NVMe Controllers 00:10:34.602 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:10:34.602 controller IO queue size 128 less than required 00:10:34.602 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:10:34.602 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:10:34.602 Initialization complete. Launching workers. 
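The abort test traced above reduces to a short RPC sequence: stand up a TCP transport, put a deliberately slow delay bdev behind a subsystem, expose it on 10.0.0.2:4420, then let the abort example flood the queue and cancel inflight I/O. Below is a minimal standalone sketch of that sequence; every command and argument is copied from the trace, and the only liberty taken is calling scripts/rpc.py directly instead of the harness's rpc_cmd wrapper.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256                  # TCP transport with the options used in this run
    $rpc bdev_malloc_create 64 4096 -b Malloc0                           # 64 MB RAM-backed bdev, 4096-byte blocks
    $rpc bdev_delay_create -b Malloc0 -d Delay0 \
         -r 1000000 -t 1000000 -w 1000000 -n 1000000                     # large injected read/write latency so aborts find inflight I/O
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0    # allow any host, serial number SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

    # Workload: 1 second at queue depth 128 on one core, aborting outstanding commands as it goes
    /home/vagrant/spdk_repo/spdk/build/examples/abort \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128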
00:10:34.602 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 34189 00:10:34.602 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 34250, failed to submit 62 00:10:34.602 success 34193, unsuccess 57, failed 0 00:10:34.602 17:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:34.602 17:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.602 17:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:34.602 17:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.602 17:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:10:34.602 17:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:10:34.602 17:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:34.602 17:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:10:34.860 17:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:34.860 17:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:10:34.860 17:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:34.860 17:11:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:34.860 rmmod nvme_tcp 00:10:34.860 rmmod nvme_fabrics 00:10:34.860 rmmod nvme_keyring 00:10:34.860 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:34.860 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:10:34.860 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:10:34.860 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 64093 ']' 00:10:34.860 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 64093 00:10:34.860 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 64093 ']' 00:10:34.860 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 64093 00:10:34.860 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:10:34.860 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:34.860 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64093 00:10:34.860 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:34.860 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:34.860 killing process with pid 64093 00:10:34.860 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64093' 00:10:34.860 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 64093 00:10:34.860 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 64093 00:10:35.119 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:35.119 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:35.119 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:35.119 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:35.119 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:35.119 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:35.119 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:35.119 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:35.119 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:35.119 00:10:35.119 real 0m4.471s 00:10:35.119 user 0m12.751s 00:10:35.119 sys 0m1.017s 00:10:35.119 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:35.119 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:35.447 ************************************ 00:10:35.447 END TEST nvmf_abort 00:10:35.447 ************************************ 00:10:35.447 17:11:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:10:35.447 17:11:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:35.447 17:11:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:35.447 17:11:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:35.447 ************************************ 00:10:35.447 START TEST nvmf_ns_hotplug_stress 00:10:35.447 ************************************ 00:10:35.447 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:10:35.447 * Looking for test storage... 
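Between tests the harness unwinds in the same order each time, as traced just above for nvmf_abort: clear the shell trap, sync, unload the kernel NVMe-oF modules, kill the nvmf_tgt process, then tear the virtual network back down. Roughly, as a paraphrase of the traced nvmftestfini path rather than the library source ($pid is a placeholder for the concrete pid, 64093 in this run):

    sync
    modprobe -v -r nvme-tcp          # the rmmod lines above show nvme_tcp, nvme_fabrics and nvme_keyring going away
    modprobe -v -r nvme-fabrics
    kill "$pid"                      # stop the nvmf_tgt started for this test...
    wait "$pid"                      # ...and reap it
    _remove_spdk_ns                  # namespace cleanup runs with xtrace disabled, so its body is not visible in this log
    ip -4 addr flush nvmf_init_if    # clear the initiator-side address left on the veth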
00:10:35.447 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:35.447 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:35.447 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:10:35.447 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:35.447 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:35.447 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:35.447 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:35.447 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:35.447 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:35.447 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:35.447 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:35.447 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:35.447 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:35.447 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:10:35.447 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:10:35.447 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:35.447 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:35.447 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:35.447 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:35.447 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:35.447 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:35.447 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:35.447 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:35.448 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.448 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.448 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.448 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:10:35.448 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.448 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:10:35.448 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:35.448 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:35.448 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:35.448 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:35.448 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:35.448 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:35.448 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:35.448 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:35.448 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:35.448 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:10:35.448 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:35.448 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:35.448 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:35.448 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:35.448 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:35.448 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:35.448 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:35.448 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:35.448 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:35.448 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:35.448 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:35.448 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:35.448 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:35.448 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:35.448 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:35.448 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:35.448 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:35.448 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:35.448 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:35.448 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:35.448 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:35.448 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:35.448 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:35.448 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@150 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:35.448 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:35.448 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:35.448 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:35.448 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:35.448 Cannot find device "nvmf_tgt_br" 00:10:35.448 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # true 00:10:35.448 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:35.448 Cannot find device "nvmf_tgt_br2" 00:10:35.448 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # true 00:10:35.448 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:35.448 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:35.448 Cannot find device "nvmf_tgt_br" 00:10:35.448 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # true 00:10:35.448 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:35.448 Cannot find device "nvmf_tgt_br2" 00:10:35.448 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # true 00:10:35.448 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:35.448 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:35.448 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:35.448 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:35.448 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # true 00:10:35.448 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:35.448 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:35.448 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # true 00:10:35.448 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:35.448 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:35.448 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:35.448 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:35.714 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:35.714 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:35.714 17:11:21 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:35.714 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:35.714 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:35.714 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:35.714 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:35.714 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:35.714 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:35.714 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:35.714 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:35.714 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:35.714 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:35.714 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:35.714 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:35.714 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:35.714 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:35.714 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:35.714 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:35.714 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:35.714 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:35.714 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:10:35.714 00:10:35.714 --- 10.0.0.2 ping statistics --- 00:10:35.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:35.714 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:10:35.714 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:35.714 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:35.714 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:10:35.714 00:10:35.714 --- 10.0.0.3 ping statistics --- 00:10:35.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:35.714 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:10:35.714 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:35.714 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:35.714 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:10:35.714 00:10:35.714 --- 10.0.0.1 ping statistics --- 00:10:35.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:35.714 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:10:35.714 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:35.714 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@433 -- # return 0 00:10:35.714 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:35.714 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:35.714 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:35.714 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:35.714 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:35.714 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:35.714 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:35.714 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:10:35.714 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:35.714 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:35.714 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:35.714 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=64360 00:10:35.714 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:35.714 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 64360 00:10:35.714 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 64360 ']' 00:10:35.714 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:35.714 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:35.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:35.714 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:35.714 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:35.714 17:11:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:35.714 [2024-07-24 17:11:21.926129] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
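With NET_TYPE=virt there is no physical NIC involved: nvmf_veth_init builds the whole test fabric from a network namespace, three veth pairs and a software bridge, which is what the ip and iptables commands above are doing. Condensed into one place, with interface and namespace names exactly as in the trace:

    ip netns add nvmf_tgt_ns_spdk

    # one veth pair for the initiator, two for the target; the *_br ends stay in the host for bridging
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # initiator is 10.0.0.1 on the host side, target answers on 10.0.0.2 and 10.0.0.3 inside the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # tie the host-side ends together, allow NVMe/TCP (port 4420) in and bridge forwarding through
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three single-packet pings above are just connectivity checks across the bridge before nvmf_tgt is started inside the namespace.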
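The rest of this test is the hotplug loop itself: after the target and its two namespaces (the slow Delay0 and a resizable NULL1) are set up via the RPCs traced below, spdk_nvme_perf runs against the subsystem for 30 seconds while the script repeatedly hot-removes and re-adds NSID 1 and grows NULL1 until perf exits. The sketch below paraphrases the loop from the trace that follows rather than quoting ns_hotplug_stress.sh; the target setup RPCs are omitted since they appear verbatim in the log, and capturing the perf pid via $! is an assumption (the run shows it as 64497).

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # 30 s of 512-byte random reads at queue depth 128 on one core; -Q 1000 keeps going past I/O errors,
    # which is presumably why the 'Message suppressed 999 times' lines appear in the output below
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 30 -q 128 -w randread -o 512 -Q 1000 &
    PERF_PID=$!

    null_size=1000
    while kill -0 "$PERF_PID"; do                                        # loop until perf exits (the final 'No such process' below)
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1       # hot-remove NSID 1 (Delay0) under load
        $rpc nvmf_subsystem_add_ns    nqn.2016-06.io.spdk:cnode1 Delay0  # and immediately hot-add it back
        (( ++null_size ))
        $rpc bdev_null_resize NULL1 "$null_size"                         # grow the null bdev behind NSID 2 while I/O is in flight
    done
    wait "$PERF_PID"
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2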
00:10:35.714 [2024-07-24 17:11:21.926239] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:35.973 [2024-07-24 17:11:22.069183] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:35.973 [2024-07-24 17:11:22.203694] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:35.973 [2024-07-24 17:11:22.203767] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:35.973 [2024-07-24 17:11:22.203782] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:35.973 [2024-07-24 17:11:22.203792] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:35.973 [2024-07-24 17:11:22.203801] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:35.973 [2024-07-24 17:11:22.203971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:35.973 [2024-07-24 17:11:22.204524] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:35.973 [2024-07-24 17:11:22.204541] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:36.909 17:11:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:36.909 17:11:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:10:36.909 17:11:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:36.909 17:11:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:36.909 17:11:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:36.909 17:11:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:36.909 17:11:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:10:36.909 17:11:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:37.168 [2024-07-24 17:11:23.188407] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:37.168 17:11:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:37.425 17:11:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:37.683 [2024-07-24 17:11:23.714624] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:37.683 17:11:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:37.941 17:11:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:10:38.263 Malloc0 00:10:38.263 17:11:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:38.263 Delay0 00:10:38.263 17:11:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:38.831 17:11:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:10:38.831 NULL1 00:10:38.831 17:11:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:10:39.090 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:10:39.090 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=64497 00:10:39.090 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64497 00:10:39.090 17:11:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:40.465 Read completed with error (sct=0, sc=11) 00:10:40.465 17:11:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:40.465 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:40.465 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:40.465 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:40.465 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:40.465 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:40.465 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:40.723 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:40.723 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:40.723 17:11:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:10:40.723 17:11:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:10:40.982 true 00:10:40.982 17:11:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64497 00:10:40.982 17:11:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:41.636 17:11:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:41.893 17:11:28 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:10:41.893 17:11:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:10:42.151 true 00:10:42.151 17:11:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64497 00:10:42.151 17:11:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:42.409 17:11:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:42.667 17:11:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:10:42.667 17:11:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:10:42.925 true 00:10:42.925 17:11:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64497 00:10:42.925 17:11:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:43.859 17:11:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:43.859 17:11:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:10:43.859 17:11:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:10:44.117 true 00:10:44.117 17:11:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64497 00:10:44.117 17:11:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:44.722 17:11:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:44.722 17:11:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:10:44.722 17:11:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:10:44.981 true 00:10:44.981 17:11:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64497 00:10:44.981 17:11:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:45.240 17:11:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:45.498 17:11:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- 
# null_size=1006 00:10:45.498 17:11:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:10:45.757 true 00:10:45.757 17:11:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64497 00:10:45.757 17:11:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:46.692 17:11:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:46.950 17:11:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:10:46.950 17:11:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:10:47.208 true 00:10:47.208 17:11:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64497 00:10:47.208 17:11:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:47.466 17:11:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:47.725 17:11:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:10:47.725 17:11:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:10:47.983 true 00:10:47.983 17:11:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64497 00:10:47.983 17:11:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:48.242 17:11:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:48.500 17:11:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:10:48.500 17:11:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:10:48.757 true 00:10:48.758 17:11:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64497 00:10:48.758 17:11:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:49.691 17:11:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:49.950 17:11:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:10:49.950 17:11:36 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:10:50.208 true 00:10:50.208 17:11:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64497 00:10:50.208 17:11:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:50.466 17:11:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:50.723 17:11:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:10:50.723 17:11:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:10:50.981 true 00:10:50.981 17:11:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64497 00:10:50.981 17:11:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:51.239 17:11:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:51.497 17:11:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:10:51.497 17:11:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:10:51.755 true 00:10:51.755 17:11:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64497 00:10:51.755 17:11:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:52.689 17:11:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:52.947 17:11:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:10:52.947 17:11:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:10:53.204 true 00:10:53.204 17:11:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64497 00:10:53.204 17:11:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:53.462 17:11:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:53.720 17:11:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:10:53.720 17:11:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:10:54.025 true 00:10:54.026 17:11:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64497 00:10:54.026 17:11:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:54.284 17:11:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:54.541 17:11:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:10:54.541 17:11:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:10:54.800 true 00:10:54.800 17:11:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64497 00:10:54.800 17:11:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:55.733 17:11:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:55.992 17:11:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:10:55.992 17:11:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:10:56.249 true 00:10:56.249 17:11:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64497 00:10:56.249 17:11:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:56.507 17:11:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:56.833 17:11:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:10:56.833 17:11:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:10:57.106 true 00:10:57.106 17:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64497 00:10:57.106 17:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:57.363 17:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:57.620 17:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:10:57.620 17:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:10:57.877 
true 00:10:57.877 17:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64497 00:10:57.877 17:11:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:58.809 17:11:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:58.809 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:59.067 17:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:10:59.067 17:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:10:59.325 true 00:10:59.326 17:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64497 00:10:59.326 17:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:59.584 17:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:59.842 17:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:10:59.842 17:11:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:11:00.099 true 00:11:00.100 17:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64497 00:11:00.100 17:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:00.357 17:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:00.616 17:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:11:00.616 17:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:11:00.616 true 00:11:00.874 17:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64497 00:11:00.874 17:11:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:01.810 17:11:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:01.810 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:01.810 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:01.810 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:01.810 Message suppressed 999 times: Read completed with error 
(sct=0, sc=11) 00:11:02.068 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:02.068 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:02.068 17:11:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:11:02.068 17:11:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:11:02.326 true 00:11:02.326 17:11:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64497 00:11:02.326 17:11:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:03.259 17:11:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:03.259 17:11:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:11:03.259 17:11:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:11:03.518 true 00:11:03.518 17:11:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64497 00:11:03.518 17:11:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:03.777 17:11:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:04.035 17:11:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:11:04.035 17:11:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:11:04.293 true 00:11:04.293 17:11:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64497 00:11:04.293 17:11:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:04.550 17:11:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:04.808 17:11:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:11:04.808 17:11:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:11:05.089 true 00:11:05.089 17:11:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64497 00:11:05.089 17:11:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:06.041 17:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:06.041 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:06.299 17:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:11:06.299 17:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:11:06.556 true 00:11:06.556 17:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64497 00:11:06.556 17:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:06.814 17:11:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:07.073 17:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:11:07.073 17:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:11:07.330 true 00:11:07.330 17:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64497 00:11:07.330 17:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:07.588 17:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:07.845 17:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:11:07.845 17:11:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:11:08.141 true 00:11:08.141 17:11:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64497 00:11:08.141 17:11:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:09.077 17:11:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:09.335 17:11:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:11:09.335 17:11:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:11:09.335 Initializing NVMe Controllers 00:11:09.335 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:09.335 Controller IO queue size 128, less than required. 00:11:09.335 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:09.335 Controller IO queue size 128, less than required. 
00:11:09.335 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:09.335 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:09.335 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:11:09.335 Initialization complete. Launching workers. 00:11:09.335 ======================================================== 00:11:09.336 Latency(us) 00:11:09.336 Device Information : IOPS MiB/s Average min max 00:11:09.336 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 594.61 0.29 95979.99 3489.95 1096983.73 00:11:09.336 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 9222.95 4.50 13877.82 3126.33 549508.63 00:11:09.336 ======================================================== 00:11:09.336 Total : 9817.56 4.79 18850.42 3126.33 1096983.73 00:11:09.336 00:11:09.594 true 00:11:09.594 17:11:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 64497 00:11:09.594 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (64497) - No such process 00:11:09.594 17:11:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 64497 00:11:09.594 17:11:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:09.852 17:11:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:10.110 17:11:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:11:10.110 17:11:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:11:10.110 17:11:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:11:10.110 17:11:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:10.110 17:11:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:11:10.368 null0 00:11:10.368 17:11:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:10.368 17:11:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:10.368 17:11:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:11:10.626 null1 00:11:10.626 17:11:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:10.626 17:11:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:10.626 17:11:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:11:10.885 null2 00:11:10.885 17:11:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:10.885 17:11:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:10.885 17:11:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:11:11.143 null3 00:11:11.143 17:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:11.143 17:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:11.143 17:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:11:11.401 null4 00:11:11.401 17:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:11.401 17:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:11.401 17:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:11:11.659 null5 00:11:11.659 17:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:11.659 17:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:11.659 17:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:11:11.917 null6 00:11:11.917 17:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:11.917 17:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:11.917 17:11:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:11:12.175 null7 00:11:12.175 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:12.175 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:12.175 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:11:12.175 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:12.175 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
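The eight backing devices for the hotplug threads are created in the trace just above (ns_hotplug_stress.sh@58-60: nthreads=8, pids=(), then bdev_null_create null0 100 4096 through null7 100 4096). A minimal sketch of that setup loop, assuming rpc.py's documented bdev_null_create arguments (total size in MB, block size in bytes) and the repository path shown in the trace; this is a reconstruction from the markers, not the script verbatim:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # path as it appears in the trace
nthreads=8
pids=()

for ((i = 0; i < nthreads; i++)); do
    # 100 MB null bdev with a 4096-byte block size, named null0..null7
    "$rpc" bdev_null_create "null$i" 100 4096
done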
00:11:12.175 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:11:12.175 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:12.175 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:12.175 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:11:12.175 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:12.175 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:12.175 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:12.175 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:12.175 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:11:12.175 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:11:12.175 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:12.175 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:12.176 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:12.176 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:12.176 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:12.176 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:12.176 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:11:12.176 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:11:12.176 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:12.176 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:12.176 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:12.176 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:12.176 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:12.176 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
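For context, the long run of @44-@50 markers earlier in this excerpt (before the controller summary) follows the same RPC-driven pattern against a single namespace: hot-remove and re-add namespace 1 and grow the NULL1 bdev on every pass while the background I/O process (PID 64497 in this run) stays alive. A hedged reconstruction from those trace markers, not the verbatim script; the starting null_size is an assumption, since the excerpt joins the loop around 1019:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
perf_pid=64497      # PID of the I/O generator, taken from the trace
null_size=1018      # assumed starting point; the excerpt shows 1019..1029

while kill -0 "$perf_pid"; do                       # @44: loop while the I/O job lives
    "$rpc" nvmf_subsystem_remove_ns "$nqn" 1        # @45: hot-remove NSID 1
    "$rpc" nvmf_subsystem_add_ns "$nqn" Delay0      # @46: re-attach the Delay0 bdev
    null_size=$((null_size + 1))                    # @49
    "$rpc" bdev_null_resize NULL1 "$null_size"      # @50: grow NULL1 (size in MB)
done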
00:11:12.176 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:11:12.176 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:12.176 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:11:12.176 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:12.176 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:12.176 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:12.176 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:12.176 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:12.176 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:12.176 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:12.176 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:11:12.176 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:11:12.176 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:12.176 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
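The @14-@17 markers interleaved through this stretch belong to the per-thread worker that each background job runs. Reconstructed from those markers (ten passes of nvmf_subsystem_add_ns -n <nsid> followed by nvmf_subsystem_remove_ns <nsid>, always against cnode1); a sketch, not the script verbatim:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

add_remove() {                       # ns_hotplug_stress.sh@14-18 in the trace
    local nsid=$1 bdev=$2            # @14: e.g. nsid=1 bdev=null0
    for ((i = 0; i < 10; i++)); do   # @16
        "$rpc" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"   # @17
        "$rpc" nvmf_subsystem_remove_ns "$nqn" "$nsid"           # @18
    done
}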
00:11:12.176 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:11:12.176 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:12.176 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:12.176 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:12.176 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:11:12.176 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:12.176 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:12.176 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:12.176 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:12.176 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:11:12.176 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:11:12.176 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:12.176 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:12.176 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:12.176 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:12.176 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:12.176 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:12.176 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
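Each add_remove worker is launched in the background with its PID recorded (@62-@64 above), and the parent then blocks on all eight of them (the @66 wait on the eight worker PIDs appears just below). A sketch of that harness, reusing nthreads, pids and add_remove from the sketches above:

for ((i = 0; i < nthreads; i++)); do
    # NSID i+1 is always paired with bdev null<i>, matching the trace
    add_remove "$((i + 1))" "null$i" &
    pids+=($!)
done

wait "${pids[@]}"                    # @66: block until all eight workers finish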
00:11:12.176 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:11:12.176 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:11:12.176 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:12.176 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:12.176 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:12.176 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:12.176 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 65533 65534 65537 65539 65540 65542 65543 65547 00:11:12.176 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:12.435 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:12.435 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:12.435 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:12.435 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:12.435 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:12.435 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:12.435 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:12.435 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:12.693 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:12.693 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:12.693 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:12.693 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:12.693 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:12.693 17:11:58 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:12.693 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:12.693 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:12.693 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:12.693 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:12.693 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:12.693 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:12.693 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:12.693 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:12.693 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:12.693 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:12.694 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:12.694 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:12.694 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:12.694 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:12.694 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:12.694 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:12.694 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:12.694 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:12.952 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:12.952 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:12.952 17:11:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 7 00:11:12.952 17:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:12.952 17:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:12.952 17:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:12.952 17:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:12.952 17:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:13.211 17:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:13.211 17:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:13.211 17:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:13.211 17:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:13.211 17:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:13.211 17:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:13.211 17:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:13.211 17:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:13.211 17:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:13.211 17:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:13.211 17:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:13.211 17:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:13.211 17:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:13.211 17:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:13.211 17:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:13.211 17:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:13.211 17:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:13.211 17:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:13.469 17:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:13.469 17:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:13.469 17:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:13.469 17:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:13.469 17:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:13.469 17:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:13.469 17:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:13.469 17:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:13.469 17:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:13.469 17:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:13.469 17:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:13.469 17:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:13.469 17:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:13.727 17:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:13.727 17:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:13.727 17:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:13.727 17:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:13.727 17:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:13.727 17:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 
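From here to the end of the excerpt the same eight-way add/remove churn simply repeats. To watch its effect while it runs, the stock nvmf_get_subsystems RPC dumps the live subsystem state; the small poll below is a hypothetical aid, not part of the traced test, and the sleep/grep combination is an assumption:

# Hypothetical spot-check: once a second, count how many namespaces are
# currently attached across the target's subsystems while the workers churn.
while sleep 1; do
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems | grep -c '"nsid"'
done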
00:11:13.727 17:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:13.727 17:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:13.727 17:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:13.727 17:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:13.727 17:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:13.727 17:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:13.727 17:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:13.727 17:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:13.727 17:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:13.728 17:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:13.728 17:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:13.728 17:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:13.728 17:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:13.728 17:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:13.728 17:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:13.728 17:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:13.986 17:11:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:13.986 17:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:13.986 17:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:13.986 17:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:13.986 17:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:13.986 17:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:11:13.986 17:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:13.986 17:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:13.986 17:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:14.244 17:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:14.244 17:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:14.244 17:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:14.244 17:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:14.244 17:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:14.244 17:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:14.244 17:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:14.244 17:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:14.244 17:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:14.244 17:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:14.244 17:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:14.244 17:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:14.244 17:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:14.244 17:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:14.503 17:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:14.503 17:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:14.503 17:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:14.503 17:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:14.503 17:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:14.503 17:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:14.503 17:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:14.503 17:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:14.503 17:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:14.503 17:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:14.503 17:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:14.503 17:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:14.503 17:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:14.503 17:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:14.503 17:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:14.763 17:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:14.763 17:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:14.763 17:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:14.763 17:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:14.763 17:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:14.763 17:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:14.763 17:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:14.763 17:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:14.763 17:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:14.763 17:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:14.763 
17:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:14.763 17:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:14.763 17:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:14.763 17:12:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:15.022 17:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:15.022 17:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:15.022 17:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:15.022 17:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:15.022 17:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:15.022 17:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:15.022 17:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:15.022 17:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:15.022 17:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:15.022 17:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:15.022 17:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:15.022 17:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:15.022 17:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:15.022 17:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:15.022 17:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:15.022 17:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:15.022 17:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:15.022 17:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:11:15.281 17:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:15.281 17:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:15.281 17:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:15.281 17:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:15.281 17:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:15.281 17:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:15.281 17:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:15.281 17:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:15.281 17:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:15.281 17:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:15.281 17:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:15.281 17:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:15.281 17:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:15.281 17:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:15.539 17:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:15.539 17:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:15.539 17:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:15.540 17:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:15.540 17:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:15.540 17:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:15.540 17:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:15.540 17:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:15.540 17:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:15.540 17:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:15.540 17:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:15.540 17:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:15.540 17:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:15.540 17:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:15.540 17:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:15.540 17:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:15.540 17:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:15.540 17:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:15.799 17:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:15.799 17:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:15.799 17:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:15.799 17:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:15.799 17:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:15.799 17:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:15.799 17:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:15.799 17:12:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:15.799 17:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:15.799 17:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:15.799 
17:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:16.057 17:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:16.057 17:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:16.057 17:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:16.057 17:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:16.057 17:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:16.057 17:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:16.057 17:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:16.057 17:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:16.057 17:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:16.057 17:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:16.057 17:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:16.057 17:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:16.057 17:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:16.057 17:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:16.057 17:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:16.057 17:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:16.057 17:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:16.315 17:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:16.315 17:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:16.315 17:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:16.315 17:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:11:16.315 17:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:16.315 17:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:16.315 17:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:16.315 17:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:16.315 17:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:16.315 17:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:16.315 17:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:16.315 17:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:16.315 17:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:16.315 17:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:16.573 17:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:16.573 17:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:16.573 17:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:16.573 17:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:16.573 17:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:16.573 17:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:16.573 17:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:16.573 17:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:16.573 17:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:16.573 17:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:16.573 17:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 3 00:11:16.573 17:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:16.573 17:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:16.573 17:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:16.573 17:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:16.573 17:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:16.573 17:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:16.573 17:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:16.831 17:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:16.831 17:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:16.831 17:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:16.831 17:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:16.831 17:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:16.831 17:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:16.831 17:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:16.831 17:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:16.831 17:12:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:16.831 17:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:16.831 17:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:16.831 17:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:16.831 17:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:17.090 17:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 2 00:11:17.090 17:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:17.090 17:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:17.090 17:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:17.090 17:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:17.090 17:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:17.090 17:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:17.090 17:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:17.090 17:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:17.090 17:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:17.090 17:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:17.090 17:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:17.090 17:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:17.348 17:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:17.348 17:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:17.348 17:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:17.348 17:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:17.348 17:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:17.348 17:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:17.348 17:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:17.348 17:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:17.348 17:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 8 00:11:17.348 17:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:17.348 17:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:17.348 17:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:17.348 17:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:17.348 17:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:17.348 17:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:17.348 17:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:17.606 17:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:17.606 17:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:17.606 17:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:17.606 17:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:17.606 17:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:17.606 17:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:17.606 17:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:17.606 17:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:17.606 17:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:17.606 17:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:17.606 17:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:17.867 17:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:17.867 17:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:17.867 17:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:17.867 17:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:17.867 17:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:11:17.867 17:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:11:17.867 17:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:17.867 17:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:11:17.867 17:12:03 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:17.867 17:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:11:17.867 17:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:17.867 17:12:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:17.867 rmmod nvme_tcp 00:11:17.867 rmmod nvme_fabrics 00:11:17.867 rmmod nvme_keyring 00:11:17.867 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:17.867 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:11:17.867 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:11:17.867 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 64360 ']' 00:11:17.867 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 64360 00:11:17.867 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 64360 ']' 00:11:17.867 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 64360 00:11:17.867 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:11:17.867 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:17.867 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64360 00:11:17.867 killing process with pid 64360 00:11:17.867 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:11:17.867 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:11:17.867 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64360' 00:11:17.867 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 64360 00:11:17.867 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 64360 00:11:18.125 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:18.125 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:18.125 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:18.126 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:18.126 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:18.126 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:18.126 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:18.126 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:18.126 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr 
flush nvmf_init_if 00:11:18.126 00:11:18.126 real 0m42.940s 00:11:18.126 user 3m27.691s 00:11:18.126 sys 0m12.769s 00:11:18.126 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:18.126 ************************************ 00:11:18.126 END TEST nvmf_ns_hotplug_stress 00:11:18.126 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:18.126 ************************************ 00:11:18.385 17:12:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:11:18.385 17:12:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:18.385 17:12:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:18.385 17:12:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:18.385 ************************************ 00:11:18.385 START TEST nvmf_delete_subsystem 00:11:18.385 ************************************ 00:11:18.385 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:11:18.385 * Looking for test storage... 00:11:18.385 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:18.385 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:18.385 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:11:18.385 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:18.385 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:18.385 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:18.385 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:18.385 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:18.385 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:18.385 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:18.385 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:18.385 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:18.385 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:18.385 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:11:18.385 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:11:18.385 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:18.385 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:18.385 17:12:04 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:18.385 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:18.385 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:18.385 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:18.385 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:18.385 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:18.385 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.385 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.385 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.385 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:11:18.385 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.385 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:11:18.385 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:18.385 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:18.385 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:18.385 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:18.385 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:18.385 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:18.385 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:18.385 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:18.385 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:11:18.385 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:18.385 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:18.385 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:18.385 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:18.385 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:18.385 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:18.385 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:18.385 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:18.385 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:18.385 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:18.385 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:18.385 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:18.385 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:11:18.385 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:18.385 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@141 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:11:18.385 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:18.385 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:18.385 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:18.385 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:18.385 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:18.385 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:18.385 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:18.385 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:18.385 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:18.385 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:18.386 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:18.386 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:18.386 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:18.386 Cannot find device "nvmf_tgt_br" 00:11:18.386 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # true 00:11:18.386 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:18.386 Cannot find device "nvmf_tgt_br2" 00:11:18.386 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # true 00:11:18.386 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:18.386 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:18.386 Cannot find device "nvmf_tgt_br" 00:11:18.386 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # true 00:11:18.386 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:18.386 Cannot find device "nvmf_tgt_br2" 00:11:18.386 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # true 00:11:18.386 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:18.644 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:18.644 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:18.644 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:18.644 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # true 00:11:18.644 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # ip 
netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:18.644 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:18.644 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # true 00:11:18.644 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:18.644 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:18.644 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:18.644 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:18.644 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:18.644 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:18.644 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:18.644 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:18.644 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:18.644 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:18.644 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:18.644 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:18.644 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:18.644 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:18.644 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:18.644 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:18.644 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:18.644 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:18.644 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:18.644 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:18.645 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:18.645 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:18.645 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:18.645 17:12:04 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:18.645 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:18.645 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:11:18.645 00:11:18.645 --- 10.0.0.2 ping statistics --- 00:11:18.645 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:18.645 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:11:18.645 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:18.645 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:18.645 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.032 ms 00:11:18.645 00:11:18.645 --- 10.0.0.3 ping statistics --- 00:11:18.645 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:18.645 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:11:18.645 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:18.645 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:18.645 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:11:18.645 00:11:18.645 --- 10.0.0.1 ping statistics --- 00:11:18.645 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:18.645 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:11:18.645 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:18.645 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@433 -- # return 0 00:11:18.645 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:18.645 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:18.645 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:18.645 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:18.645 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:18.645 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:18.645 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:18.645 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:11:18.645 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:18.645 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:18.645 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:18.645 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=66855 00:11:18.645 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:11:18.645 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 66855 00:11:18.645 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 66855 ']' 00:11:18.645 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:18.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:18.645 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:18.645 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:18.645 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:18.645 17:12:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:18.903 [2024-07-24 17:12:04.880196] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:11:18.903 [2024-07-24 17:12:04.880282] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:18.903 [2024-07-24 17:12:05.018189] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:19.161 [2024-07-24 17:12:05.147240] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:19.161 [2024-07-24 17:12:05.147295] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:19.161 [2024-07-24 17:12:05.147309] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:19.161 [2024-07-24 17:12:05.147319] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:19.161 [2024-07-24 17:12:05.147328] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
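Note on the setup traced above: before the target application is launched, nvmftestinit / nvmf_veth_init builds the virtual test topology one ip/iptables call at a time. Consolidated in one place, the traced commands amount to roughly the following (an illustrative consolidation of the trace, not the common.sh source itself; assumes it is run as root on the CI VM):
# Initiator stays in the root netns; the target runs inside nvmf_tgt_ns_spdk, bridged via nvmf_br.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # NVMF_INITIATOR_IP
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # NVMF_FIRST_TARGET_IP
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # NVMF_SECOND_TARGET_IP
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                                                        # initiator -> target reachability
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1                         # target -> initiator reachability
# nvmfappstart then launches the target inside the namespace, as traced above:
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3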
00:11:19.161 [2024-07-24 17:12:05.147465] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:19.161 [2024-07-24 17:12:05.147946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:19.727 17:12:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:19.727 17:12:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:11:19.727 17:12:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:19.727 17:12:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:19.727 17:12:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:19.727 17:12:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:19.727 17:12:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:19.727 17:12:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.727 17:12:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:19.727 [2024-07-24 17:12:05.928328] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:19.727 17:12:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.727 17:12:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:19.727 17:12:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.727 17:12:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:19.727 17:12:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.727 17:12:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:19.727 17:12:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.727 17:12:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:19.727 [2024-07-24 17:12:05.944477] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:19.727 17:12:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.727 17:12:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:19.727 17:12:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.727 17:12:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:19.727 NULL1 00:11:19.727 17:12:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.727 17:12:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 
-w 1000000 -n 1000000 00:11:19.727 17:12:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.727 17:12:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:19.986 Delay0 00:11:19.986 17:12:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.986 17:12:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:19.986 17:12:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.986 17:12:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:19.986 17:12:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.986 17:12:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=66906 00:11:19.986 17:12:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:19.986 17:12:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:11:19.986 [2024-07-24 17:12:06.149016] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:11:21.887 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:21.887 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.887 17:12:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.147 starting I/O failed: -6 00:11:22.147 Write completed with error (sct=0, sc=8) 00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.147 Write completed with error (sct=0, sc=8) 00:11:22.147 starting I/O failed: -6 00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.147 Write completed with error (sct=0, sc=8) 00:11:22.147 starting I/O failed: -6 00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.147 starting I/O failed: -6 00:11:22.147 Write completed with error (sct=0, sc=8) 00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.147 Write completed with error (sct=0, sc=8) 00:11:22.147 Write completed with error (sct=0, sc=8) 00:11:22.147 starting I/O failed: -6 00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.147 Write completed with error (sct=0, sc=8) 00:11:22.147 Write completed with error (sct=0, sc=8) 00:11:22.147 Write completed with error (sct=0, sc=8) 00:11:22.147 starting I/O failed: -6 
00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.147 Write completed with error (sct=0, sc=8) 00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.147 starting I/O failed: -6 00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.147 Write completed with error (sct=0, sc=8) 00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.147 starting I/O failed: -6 00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.147 starting I/O failed: -6 00:11:22.147 Write completed with error (sct=0, sc=8) 00:11:22.147 Write completed with error (sct=0, sc=8) 00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.147 starting I/O failed: -6 00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.147 Write completed with error (sct=0, sc=8) 00:11:22.147 Write completed with error (sct=0, sc=8) 00:11:22.147 Write completed with error (sct=0, sc=8) 00:11:22.147 starting I/O failed: -6 00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.147 [2024-07-24 17:12:08.183054] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135c910 is same with the state(5) to be set 00:11:22.147 Write completed with error (sct=0, sc=8) 00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.147 Write completed with error (sct=0, sc=8) 00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.147 Write completed with error (sct=0, sc=8) 00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.147 Write completed with error (sct=0, sc=8) 00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.147 Write completed with error (sct=0, sc=8) 00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.147 Write completed with error (sct=0, sc=8) 00:11:22.147 Write completed with error (sct=0, sc=8) 00:11:22.147 Write completed with error (sct=0, sc=8) 00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.147 Write completed with error (sct=0, sc=8) 00:11:22.147 Write completed with error (sct=0, sc=8) 00:11:22.147 Write completed with error (sct=0, sc=8) 00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.147 Write completed with error (sct=0, sc=8) 00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.147 Read completed with error (sct=0, sc=8) 
00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.147 Write completed with error (sct=0, sc=8) 00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.147 Write completed with error (sct=0, sc=8) 00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.147 Write completed with error (sct=0, sc=8) 00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.147 Write completed with error (sct=0, sc=8) 00:11:22.147 Write completed with error (sct=0, sc=8) 00:11:22.147 Write completed with error (sct=0, sc=8) 00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.147 starting I/O failed: -6 00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.147 starting I/O failed: -6 00:11:22.147 Write completed with error (sct=0, sc=8) 00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.147 starting I/O failed: -6 00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.147 Write completed with error (sct=0, sc=8) 00:11:22.147 Write completed with error (sct=0, sc=8) 00:11:22.147 starting I/O failed: -6 00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.147 Write completed with error (sct=0, sc=8) 00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.147 Write completed with error (sct=0, sc=8) 00:11:22.147 starting I/O failed: -6 00:11:22.147 Write completed with error (sct=0, sc=8) 00:11:22.147 Write completed with error (sct=0, sc=8) 00:11:22.147 Write completed with error (sct=0, sc=8) 00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.147 starting I/O failed: -6 00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.147 Write completed with error (sct=0, sc=8) 00:11:22.147 Write completed with error (sct=0, sc=8) 00:11:22.147 starting I/O failed: -6 00:11:22.147 Write completed with error (sct=0, sc=8) 00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.147 starting I/O failed: -6 00:11:22.147 Write completed with error (sct=0, sc=8) 00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.147 Write completed with error (sct=0, sc=8) 00:11:22.147 starting I/O failed: -6 00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.147 Write completed with error (sct=0, sc=8) 00:11:22.147 Write completed with error (sct=0, sc=8) 00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.147 starting I/O failed: -6 00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.147 [2024-07-24 17:12:08.186617] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5cdc00d470 is same with the state(5) to be set 00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.147 Read 
completed with error (sct=0, sc=8) 00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.147 Write completed with error (sct=0, sc=8) 00:11:22.147 Write completed with error (sct=0, sc=8) 00:11:22.147 Write completed with error (sct=0, sc=8) 00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.147 Write completed with error (sct=0, sc=8) 00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.147 Write completed with error (sct=0, sc=8) 00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.147 Write completed with error (sct=0, sc=8) 00:11:22.147 Write completed with error (sct=0, sc=8) 00:11:22.147 Read completed with error (sct=0, sc=8) 00:11:22.148 Write completed with error (sct=0, sc=8) 00:11:22.148 Read completed with error (sct=0, sc=8) 00:11:22.148 Write completed with error (sct=0, sc=8) 00:11:22.148 Read completed with error (sct=0, sc=8) 00:11:22.148 Read completed with error (sct=0, sc=8) 00:11:22.148 Read completed with error (sct=0, sc=8) 00:11:22.148 Write completed with error (sct=0, sc=8) 00:11:22.148 Read completed with error (sct=0, sc=8) 00:11:22.148 Read completed with error (sct=0, sc=8) 00:11:22.148 Write completed with error (sct=0, sc=8) 00:11:22.148 Read completed with error (sct=0, sc=8) 00:11:22.148 Write completed with error (sct=0, sc=8) 00:11:22.148 Write completed with error (sct=0, sc=8) 00:11:22.148 Read completed with error (sct=0, sc=8) 00:11:22.148 Write completed with error (sct=0, sc=8) 00:11:22.148 Write completed with error (sct=0, sc=8) 00:11:22.148 Read completed with error (sct=0, sc=8) 00:11:22.148 Read completed with error (sct=0, sc=8) 00:11:22.148 Write completed with error (sct=0, sc=8) 00:11:22.148 Read completed with error (sct=0, sc=8) 00:11:22.148 Read completed with error (sct=0, sc=8) 00:11:22.148 Read completed with error (sct=0, sc=8) 00:11:22.148 Read completed with error (sct=0, sc=8) 00:11:22.148 Read completed with error (sct=0, sc=8) 00:11:22.148 Read completed with error (sct=0, sc=8) 00:11:22.148 Read completed with error (sct=0, sc=8) 00:11:22.148 Read completed with error (sct=0, sc=8) 00:11:22.148 Read completed with error (sct=0, sc=8) 00:11:22.148 Read completed with error (sct=0, sc=8) 00:11:23.113 [2024-07-24 17:12:09.161860] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x133d510 is same with the state(5) to be set 00:11:23.113 Read completed with error (sct=0, sc=8) 00:11:23.113 Read completed with error (sct=0, sc=8) 00:11:23.113 Read completed with error (sct=0, sc=8) 00:11:23.113 Read completed with error (sct=0, sc=8) 00:11:23.113 Read completed with error (sct=0, sc=8) 00:11:23.113 Read completed with error (sct=0, sc=8) 00:11:23.113 Write completed with error (sct=0, sc=8) 00:11:23.113 Read completed with error (sct=0, sc=8) 00:11:23.113 Read completed with error (sct=0, sc=8) 00:11:23.113 Read completed with error (sct=0, sc=8) 00:11:23.113 Write completed with error (sct=0, sc=8) 00:11:23.113 Read completed with error (sct=0, sc=8) 00:11:23.113 Read completed with error (sct=0, sc=8) 00:11:23.113 Read completed with error (sct=0, sc=8) 00:11:23.113 Write completed with error 
(sct=0, sc=8) 00:11:23.113 Read completed with error (sct=0, sc=8) 00:11:23.113 Read completed with error (sct=0, sc=8) 00:11:23.113 Write completed with error (sct=0, sc=8) 00:11:23.113 Write completed with error (sct=0, sc=8) 00:11:23.113 Read completed with error (sct=0, sc=8) 00:11:23.113 Write completed with error (sct=0, sc=8) 00:11:23.113 Read completed with error (sct=0, sc=8) 00:11:23.113 [2024-07-24 17:12:09.184358] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1360a80 is same with the state(5) to be set 00:11:23.113 Read completed with error (sct=0, sc=8) 00:11:23.113 Write completed with error (sct=0, sc=8) 00:11:23.113 Read completed with error (sct=0, sc=8) 00:11:23.113 Read completed with error (sct=0, sc=8) 00:11:23.113 Read completed with error (sct=0, sc=8) 00:11:23.113 Read completed with error (sct=0, sc=8) 00:11:23.113 Read completed with error (sct=0, sc=8) 00:11:23.113 Read completed with error (sct=0, sc=8) 00:11:23.113 Read completed with error (sct=0, sc=8) 00:11:23.113 Read completed with error (sct=0, sc=8) 00:11:23.113 Read completed with error (sct=0, sc=8) 00:11:23.113 Read completed with error (sct=0, sc=8) 00:11:23.113 Write completed with error (sct=0, sc=8) 00:11:23.113 Read completed with error (sct=0, sc=8) 00:11:23.113 Read completed with error (sct=0, sc=8) 00:11:23.113 Read completed with error (sct=0, sc=8) 00:11:23.113 Read completed with error (sct=0, sc=8) 00:11:23.113 Write completed with error (sct=0, sc=8) 00:11:23.113 Read completed with error (sct=0, sc=8) 00:11:23.113 Write completed with error (sct=0, sc=8) 00:11:23.113 Read completed with error (sct=0, sc=8) 00:11:23.113 [2024-07-24 17:12:09.184897] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x135f6c0 is same with the state(5) to be set 00:11:23.113 Read completed with error (sct=0, sc=8) 00:11:23.113 Read completed with error (sct=0, sc=8) 00:11:23.113 Write completed with error (sct=0, sc=8) 00:11:23.113 Read completed with error (sct=0, sc=8) 00:11:23.113 Read completed with error (sct=0, sc=8) 00:11:23.113 Read completed with error (sct=0, sc=8) 00:11:23.113 Read completed with error (sct=0, sc=8) 00:11:23.113 Read completed with error (sct=0, sc=8) 00:11:23.113 Write completed with error (sct=0, sc=8) 00:11:23.113 Write completed with error (sct=0, sc=8) 00:11:23.113 Read completed with error (sct=0, sc=8) 00:11:23.113 Read completed with error (sct=0, sc=8) 00:11:23.113 Read completed with error (sct=0, sc=8) 00:11:23.113 Read completed with error (sct=0, sc=8) 00:11:23.113 Write completed with error (sct=0, sc=8) 00:11:23.113 Read completed with error (sct=0, sc=8) 00:11:23.113 Read completed with error (sct=0, sc=8) 00:11:23.113 Write completed with error (sct=0, sc=8) 00:11:23.113 Read completed with error (sct=0, sc=8) 00:11:23.113 Write completed with error (sct=0, sc=8) 00:11:23.113 Write completed with error (sct=0, sc=8) 00:11:23.113 Read completed with error (sct=0, sc=8) 00:11:23.113 Write completed with error (sct=0, sc=8) 00:11:23.113 [2024-07-24 17:12:09.186509] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5cdc00d000 is same with the state(5) to be set 00:11:23.113 Write completed with error (sct=0, sc=8) 00:11:23.113 Write completed with error (sct=0, sc=8) 00:11:23.113 Read completed with error (sct=0, sc=8) 00:11:23.113 Read completed with error (sct=0, sc=8) 00:11:23.113 Write completed with error (sct=0, sc=8) 00:11:23.113 Write completed with error 
(sct=0, sc=8) 00:11:23.113 Read completed with error (sct=0, sc=8) 00:11:23.113 Read completed with error (sct=0, sc=8) 00:11:23.113 Read completed with error (sct=0, sc=8) 00:11:23.113 Read completed with error (sct=0, sc=8) 00:11:23.113 Read completed with error (sct=0, sc=8) 00:11:23.113 Write completed with error (sct=0, sc=8) 00:11:23.113 Read completed with error (sct=0, sc=8) 00:11:23.113 Read completed with error (sct=0, sc=8) 00:11:23.113 Read completed with error (sct=0, sc=8) 00:11:23.113 Read completed with error (sct=0, sc=8) 00:11:23.113 Write completed with error (sct=0, sc=8) 00:11:23.113 Read completed with error (sct=0, sc=8) 00:11:23.113 Read completed with error (sct=0, sc=8) 00:11:23.113 Read completed with error (sct=0, sc=8) 00:11:23.113 Write completed with error (sct=0, sc=8) 00:11:23.113 Write completed with error (sct=0, sc=8) 00:11:23.113 Write completed with error (sct=0, sc=8) 00:11:23.113 Read completed with error (sct=0, sc=8) 00:11:23.113 Read completed with error (sct=0, sc=8) 00:11:23.113 Write completed with error (sct=0, sc=8) 00:11:23.113 Read completed with error (sct=0, sc=8) 00:11:23.113 Read completed with error (sct=0, sc=8) 00:11:23.113 Read completed with error (sct=0, sc=8) 00:11:23.113 Read completed with error (sct=0, sc=8) 00:11:23.113 [2024-07-24 17:12:09.186746] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5cdc00d7a0 is same with the state(5) to be set 00:11:23.113 Initializing NVMe Controllers 00:11:23.113 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:23.113 Controller IO queue size 128, less than required. 00:11:23.113 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:23.113 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:11:23.113 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:11:23.114 Initialization complete. Launching workers. 
00:11:23.114 ======================================================== 00:11:23.114 Latency(us) 00:11:23.114 Device Information : IOPS MiB/s Average min max 00:11:23.114 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 164.49 0.08 908275.94 466.57 1010501.90 00:11:23.114 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 158.52 0.08 974866.77 323.43 2002060.06 00:11:23.114 ======================================================== 00:11:23.114 Total : 323.01 0.16 940956.67 323.43 2002060.06 00:11:23.114 00:11:23.114 [2024-07-24 17:12:09.187929] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x133d510 (9): Bad file descriptor 00:11:23.114 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred 00:11:23.114 17:12:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.114 17:12:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:11:23.114 17:12:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 66906 00:11:23.114 17:12:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:11:23.680 17:12:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:11:23.680 17:12:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 66906 00:11:23.680 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (66906) - No such process 00:11:23.680 17:12:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 66906 00:11:23.680 17:12:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:11:23.680 17:12:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 66906 00:11:23.680 17:12:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:11:23.680 17:12:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:23.681 17:12:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:11:23.681 17:12:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:23.681 17:12:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 66906 00:11:23.681 17:12:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:11:23.681 17:12:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:23.681 17:12:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:23.681 17:12:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:23.681 17:12:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:23.681 17:12:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.681 17:12:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@10 -- # set +x 00:11:23.681 17:12:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.681 17:12:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:23.681 17:12:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.681 17:12:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:23.681 [2024-07-24 17:12:09.713340] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:23.681 17:12:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.681 17:12:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:23.681 17:12:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.681 17:12:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:23.681 17:12:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.681 17:12:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=66951 00:11:23.681 17:12:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:23.681 17:12:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:11:23.681 17:12:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 66951 00:11:23.681 17:12:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:23.681 [2024-07-24 17:12:09.893083] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
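The trace above launches spdk_nvme_perf in the background against nqn.2016-06.io.spdk:cnode1 and records its PID (66951); the lines that follow are the test polling that PID with kill -0 every 0.5 s while the target tears the subsystem down underneath it. Below is a condensed sketch of that wait pattern, with illustrative variable names rather than the exact delete_subsystem.sh code; the perf flags and the 20 x 0.5 s bound are taken from the trace itself.

#!/usr/bin/env bash
# Illustrative sketch of the wait pattern traced above (not the exact script):
# start spdk_nvme_perf against the subsystem in the background, then poll its
# PID while the target deletes the subsystem out from under it.
set -euo pipefail

SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin   # path taken from the log

"$SPDK_BIN/spdk_nvme_perf" -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!

delay=0
# kill -0 only checks that the process still exists; the loop gives perf a
# bounded window (20 iterations x 0.5 s = 10 s) to exit once the subsystem
# disappears underneath it.
while kill -0 "$perf_pid" 2>/dev/null; do
    if (( delay++ > 20 )); then
        echo "perf did not exit in time" >&2
        exit 1
    fi
    sleep 0.5
done
echo "perf (pid $perf_pid) has exited"

Because the subsystem is deleted while perf still has commands outstanding, every in-flight I/O completes with sct=0, sc=8, which is the generic NVMe "ABORTED - SQ DELETION" status (00/08) printed later in this log; that is what the long runs of "Read/Write completed with error (sct=0, sc=8)" earlier in the output correspond to.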
00:11:24.247 17:12:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:24.247 17:12:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 66951 00:11:24.247 17:12:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:24.506 17:12:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:24.506 17:12:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 66951 00:11:24.506 17:12:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:25.071 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:25.071 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 66951 00:11:25.071 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:25.638 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:25.638 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 66951 00:11:25.638 17:12:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:26.205 17:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:26.205 17:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 66951 00:11:26.205 17:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:26.793 17:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:26.793 17:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 66951 00:11:26.793 17:12:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:26.793 Initializing NVMe Controllers 00:11:26.793 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:26.793 Controller IO queue size 128, less than required. 00:11:26.793 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:26.793 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:11:26.793 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:11:26.793 Initialization complete. Launching workers. 
00:11:26.793 ======================================================== 00:11:26.793 Latency(us) 00:11:26.793 Device Information : IOPS MiB/s Average min max 00:11:26.793 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003434.13 1000120.98 1042150.87 00:11:26.793 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005175.14 1000209.71 1013841.00 00:11:26.793 ======================================================== 00:11:26.793 Total : 256.00 0.12 1004304.64 1000120.98 1042150.87 00:11:26.793 00:11:27.052 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:27.052 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 66951 00:11:27.052 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (66951) - No such process 00:11:27.052 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 66951 00:11:27.052 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:11:27.052 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:11:27.052 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:27.052 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:11:27.312 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:27.312 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:11:27.312 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:27.312 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:27.312 rmmod nvme_tcp 00:11:27.312 rmmod nvme_fabrics 00:11:27.312 rmmod nvme_keyring 00:11:27.312 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:27.312 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:11:27.312 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:11:27.312 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 66855 ']' 00:11:27.312 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 66855 00:11:27.312 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 66855 ']' 00:11:27.312 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 66855 00:11:27.312 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:11:27.313 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:27.313 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66855 00:11:27.313 killing process with pid 66855 00:11:27.313 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:27.313 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' 
reactor_0 = sudo ']' 00:11:27.313 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66855' 00:11:27.313 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 66855 00:11:27.313 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 66855 00:11:27.572 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:27.572 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:27.572 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:27.572 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:27.572 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:27.572 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:27.572 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:27.572 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:27.572 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:27.572 00:11:27.572 real 0m9.269s 00:11:27.572 user 0m28.725s 00:11:27.572 sys 0m1.545s 00:11:27.572 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:27.572 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:27.572 ************************************ 00:11:27.572 END TEST nvmf_delete_subsystem 00:11:27.572 ************************************ 00:11:27.572 17:12:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:11:27.572 17:12:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:27.572 17:12:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:27.572 17:12:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:27.572 ************************************ 00:11:27.572 START TEST nvmf_host_management 00:11:27.572 ************************************ 00:11:27.572 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:11:27.572 * Looking for test storage... 
00:11:27.572 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:27.572 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:27.572 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:11:27.572 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:27.572 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:27.572 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:27.572 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:27.572 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:27.572 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:27.572 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:27.572 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:27.572 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:27.572 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:27.831 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:11:27.831 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:11:27.831 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:27.831 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:27.831 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:27.831 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:27.831 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:27.831 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:27.831 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:27.831 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:27.831 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.831 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.831 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.831 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:11:27.831 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.831 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:11:27.831 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:27.831 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:27.831 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:27.831 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:27.831 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:27.831 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:11:27.831 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:27.831 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:27.831 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:27.831 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:27.831 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:11:27.831 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:27.831 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:27.831 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:27.831 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:27.831 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:27.831 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:27.831 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:27.831 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:27.831 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:27.831 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:27.831 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:27.831 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:27.831 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:11:27.831 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:27.831 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:27.831 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:27.831 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:27.831 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:27.831 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:27.831 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:27.831 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:27.831 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:27.831 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:27.831 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 
-- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:27.831 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:27.831 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:27.831 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:27.831 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:27.831 Cannot find device "nvmf_tgt_br" 00:11:27.831 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # true 00:11:27.831 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:27.831 Cannot find device "nvmf_tgt_br2" 00:11:27.831 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # true 00:11:27.831 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:27.831 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:27.831 Cannot find device "nvmf_tgt_br" 00:11:27.831 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # true 00:11:27.831 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:27.831 Cannot find device "nvmf_tgt_br2" 00:11:27.831 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # true 00:11:27.831 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:27.831 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:27.831 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:27.831 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:27.831 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:11:27.831 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:27.831 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:27.831 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:11:27.831 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:27.831 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:27.831 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:27.831 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:27.831 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:27.831 17:12:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:27.831 17:12:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:27.831 17:12:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:27.831 17:12:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:27.831 17:12:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:27.831 17:12:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:27.832 17:12:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:28.090 17:12:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:28.090 17:12:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:28.090 17:12:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:28.090 17:12:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:28.090 17:12:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:28.090 17:12:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:28.090 17:12:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:28.090 17:12:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:28.090 17:12:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:28.090 17:12:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:28.090 17:12:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:28.090 17:12:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:28.090 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:28.090 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:11:28.090 00:11:28.090 --- 10.0.0.2 ping statistics --- 00:11:28.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:28.091 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:11:28.091 17:12:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:28.091 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:28.091 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:11:28.091 00:11:28.091 --- 10.0.0.3 ping statistics --- 00:11:28.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:28.091 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:11:28.091 17:12:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:28.091 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:28.091 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:11:28.091 00:11:28.091 --- 10.0.0.1 ping statistics --- 00:11:28.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:28.091 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:11:28.091 17:12:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:28.091 17:12:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@433 -- # return 0 00:11:28.091 17:12:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:28.091 17:12:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:28.091 17:12:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:28.091 17:12:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:28.091 17:12:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:28.091 17:12:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:28.091 17:12:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:28.091 17:12:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:11:28.091 17:12:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:11:28.091 17:12:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:11:28.091 17:12:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:28.091 17:12:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:28.091 17:12:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:28.091 17:12:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=67194 00:11:28.091 17:12:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:11:28.091 17:12:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 67194 00:11:28.091 17:12:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 67194 ']' 00:11:28.091 17:12:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:28.091 17:12:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:28.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:28.091 17:12:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:28.091 17:12:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:28.091 17:12:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:28.091 [2024-07-24 17:12:14.270612] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
00:11:28.091 [2024-07-24 17:12:14.270712] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:28.349 [2024-07-24 17:12:14.412157] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:28.349 [2024-07-24 17:12:14.534827] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:28.349 [2024-07-24 17:12:14.535154] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:28.349 [2024-07-24 17:12:14.535294] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:28.349 [2024-07-24 17:12:14.535347] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:28.349 [2024-07-24 17:12:14.535443] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:28.349 [2024-07-24 17:12:14.535640] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:28.349 [2024-07-24 17:12:14.536154] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:28.349 [2024-07-24 17:12:14.536255] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:11:28.349 [2024-07-24 17:12:14.536263] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:29.282 17:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:29.282 17:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:11:29.282 17:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:29.282 17:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:29.282 17:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:29.282 17:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:29.282 17:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:29.282 17:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.282 17:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:29.282 [2024-07-24 17:12:15.341592] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:29.282 17:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.282 17:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:11:29.282 17:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:29.282 17:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:29.282 17:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:11:29.282 17:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:11:29.282 17:12:15 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:11:29.282 17:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.282 17:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:29.282 Malloc0 00:11:29.282 [2024-07-24 17:12:15.412529] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:29.282 17:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.282 17:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:11:29.282 17:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:29.282 17:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:29.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:29.282 17:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=67266 00:11:29.282 17:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 67266 /var/tmp/bdevperf.sock 00:11:29.282 17:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 67266 ']' 00:11:29.282 17:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:11:29.282 17:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:29.282 17:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:11:29.282 17:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:11:29.282 17:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:29.282 17:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:11:29.282 17:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:11:29.282 17:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:29.282 17:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:29.282 17:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:29.282 { 00:11:29.282 "params": { 00:11:29.282 "name": "Nvme$subsystem", 00:11:29.282 "trtype": "$TEST_TRANSPORT", 00:11:29.282 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:29.282 "adrfam": "ipv4", 00:11:29.282 "trsvcid": "$NVMF_PORT", 00:11:29.282 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:29.282 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:29.282 "hdgst": ${hdgst:-false}, 00:11:29.282 "ddgst": ${ddgst:-false} 00:11:29.282 }, 00:11:29.282 "method": "bdev_nvme_attach_controller" 00:11:29.282 } 00:11:29.282 EOF 00:11:29.282 )") 00:11:29.282 17:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:29.282 17:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:11:29.282 17:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:11:29.282 17:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:11:29.282 17:12:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:29.282 "params": { 00:11:29.282 "name": "Nvme0", 00:11:29.282 "trtype": "tcp", 00:11:29.282 "traddr": "10.0.0.2", 00:11:29.282 "adrfam": "ipv4", 00:11:29.282 "trsvcid": "4420", 00:11:29.282 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:29.282 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:11:29.282 "hdgst": false, 00:11:29.282 "ddgst": false 00:11:29.282 }, 00:11:29.282 "method": "bdev_nvme_attach_controller" 00:11:29.282 }' 00:11:29.540 [2024-07-24 17:12:15.537733] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:11:29.540 [2024-07-24 17:12:15.537870] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67266 ] 00:11:29.540 [2024-07-24 17:12:15.677438] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:29.826 [2024-07-24 17:12:15.807551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:29.826 Running I/O for 10 seconds... 
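At this point bdevperf has attached Nvme0 to nqn.2016-06.io.spdk:cnode0 at 10.0.0.2:4420 using the JSON printed above and started its 10-second verify workload; the next step in the trace is host_management.sh waiting until the Nvme0n1 bdev has actually completed some reads before it removes the host from the subsystem. A minimal sketch of that polling step follows, assuming SPDK's scripts/rpc.py helper (which is what the rpc_cmd wrapper resolves to here) and jq; paths and names mirror the log but the loop is an approximation, not the exact script.

#!/usr/bin/env bash
# Illustrative sketch of the "wait for I/O" step traced below: query the
# running bdevperf instance over its RPC socket and wait until enough read
# operations have completed. Assumes scripts/rpc.py and jq are available.
set -euo pipefail

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # repo layout from the log
SOCK=/var/tmp/bdevperf.sock

for i in {10..1}; do
    reads=$("$RPC" -s "$SOCK" bdev_get_iostat -b Nvme0n1 \
            | jq -r '.bdevs[0].num_read_ops')
    # The test only needs evidence that traffic is flowing before it removes
    # the host from the subsystem; 100 read ops is the threshold in the trace.
    if [ "$reads" -ge 100 ]; then
        echo "bdevperf is doing I/O (num_read_ops=$reads)"
        exit 0
    fi
    sleep 1
done
echo "no I/O observed" >&2
exit 1

In the run below the first sample already reports read_io_count=963, so the 100-read threshold is met immediately and the test proceeds straight to nvmf_subsystem_remove_host, which is what triggers the aborted-I/O completions that follow.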
00:11:30.397 17:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:30.397 17:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:11:30.397 17:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:11:30.397 17:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.397 17:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:30.657 17:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.657 17:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:30.657 17:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:11:30.657 17:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:11:30.657 17:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:11:30.657 17:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:11:30.657 17:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:11:30.657 17:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:11:30.657 17:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:11:30.657 17:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:11:30.657 17:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.657 17:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:11:30.657 17:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:30.657 17:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.657 17:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=963 00:11:30.657 17:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 963 -ge 100 ']' 00:11:30.657 17:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:11:30.657 17:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:11:30.657 17:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:11:30.657 17:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:11:30.657 17:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.657 17:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:30.657 [2024-07-24 
17:12:16.705624] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1818310 is same with the state(5) to be set 00:11:30.657 17:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.657 17:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:11:30.657 17:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.657 17:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:30.657 [2024-07-24 17:12:16.719290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:11:30.657 [2024-07-24 17:12:16.719336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:30.657 [2024-07-24 17:12:16.719352] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:11:30.657 [2024-07-24 17:12:16.719362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:30.657 [2024-07-24 17:12:16.719373] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:11:30.657 [2024-07-24 17:12:16.719383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:30.657 [2024-07-24 17:12:16.719393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:11:30.657 [2024-07-24 17:12:16.719403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:30.657 [2024-07-24 17:12:16.719413] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x193caf0 is same with the state(5) to be set 00:11:30.657 17:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.657 17:12:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:11:30.657 task offset: 8192 on job bdev=Nvme0n1 fails 00:11:30.657 00:11:30.657 Latency(us) 00:11:30.657 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:30.657 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:11:30.657 Job: Nvme0n1 ended in about 0.73 seconds with error 00:11:30.657 Verification LBA range: start 0x0 length 0x400 00:11:30.657 Nvme0n1 : 0.73 1485.84 92.86 87.40 0.00 39711.30 1891.61 38130.04 00:11:30.657 =================================================================================================================== 00:11:30.657 Total : 1485.84 92.86 87.40 0.00 39711.30 1891.61 38130.04 00:11:30.657 [2024-07-24 17:12:16.726269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:30.657 [2024-07-24 17:12:16.726301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:30.657 [2024-07-24 17:12:16.726324] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:30.657 [2024-07-24 17:12:16.726335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:30.657 [... the same nvme_io_qpair_print_command WRITE / spdk_nvme_print_completion ABORTED - SQ DELETION (00/08) pair repeats for cid 2 through 60 (lba 8448 through 15872, stepping by 128), timestamps 17:12:16.726348 through 17:12:16.727658 ...] 00:11:30.659 [2024-07-24 17:12:16.727669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:30.659 [2024-07-24 17:12:16.727682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:30.659 [2024-07-24 17:12:16.727694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:30.659 [2024-07-24 17:12:16.727704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:30.659 [2024-07-24 17:12:16.727716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:30.659 [2024-07-24 17:12:16.727726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:30.659 [2024-07-24 17:12:16.727805] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x193c820 was disconnected and freed. reset controller. 00:11:30.659 [2024-07-24 17:12:16.728913] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:11:30.659 [2024-07-24 17:12:16.730900] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:30.659 [2024-07-24 17:12:16.730924] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x193caf0 (9): Bad file descriptor 00:11:30.659 [2024-07-24 17:12:16.735043] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:11:31.592 17:12:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 67266 00:11:31.592 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (67266) - No such process 00:11:31.592 17:12:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:11:31.592 17:12:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:11:31.592 17:12:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:11:31.592 17:12:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:11:31.592 17:12:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:11:31.592 17:12:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:11:31.592 17:12:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:31.592 17:12:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:31.592 { 00:11:31.592 "params": { 00:11:31.592 "name": "Nvme$subsystem", 00:11:31.592 "trtype": "$TEST_TRANSPORT", 00:11:31.592 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:31.592 "adrfam": "ipv4", 00:11:31.592 "trsvcid": "$NVMF_PORT", 00:11:31.592 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:31.592 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:31.592 "hdgst": ${hdgst:-false}, 00:11:31.592 "ddgst": ${ddgst:-false} 00:11:31.592 }, 00:11:31.592 "method": "bdev_nvme_attach_controller" 00:11:31.592 } 00:11:31.592 EOF 00:11:31.592 
)") 00:11:31.592 17:12:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:11:31.592 17:12:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:11:31.592 17:12:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:11:31.592 17:12:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:31.592 "params": { 00:11:31.592 "name": "Nvme0", 00:11:31.592 "trtype": "tcp", 00:11:31.592 "traddr": "10.0.0.2", 00:11:31.592 "adrfam": "ipv4", 00:11:31.592 "trsvcid": "4420", 00:11:31.592 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:31.592 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:11:31.592 "hdgst": false, 00:11:31.592 "ddgst": false 00:11:31.592 }, 00:11:31.592 "method": "bdev_nvme_attach_controller" 00:11:31.592 }' 00:11:31.592 [2024-07-24 17:12:17.773995] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:11:31.592 [2024-07-24 17:12:17.774084] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67316 ] 00:11:31.850 [2024-07-24 17:12:17.906953] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:31.850 [2024-07-24 17:12:18.013043] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:32.107 Running I/O for 1 seconds... 00:11:33.041 00:11:33.042 Latency(us) 00:11:33.042 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:33.042 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:11:33.042 Verification LBA range: start 0x0 length 0x400 00:11:33.042 Nvme0n1 : 1.02 1569.67 98.10 0.00 0.00 39970.49 5719.51 36461.85 00:11:33.042 =================================================================================================================== 00:11:33.042 Total : 1569.67 98.10 0.00 0.00 39970.49 5719.51 36461.85 00:11:33.303 17:12:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:11:33.303 17:12:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:11:33.303 17:12:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:11:33.303 17:12:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:11:33.303 17:12:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:11:33.303 17:12:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:33.303 17:12:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:11:33.303 17:12:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:33.303 17:12:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:11:33.303 17:12:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:33.303 17:12:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:33.303 rmmod nvme_tcp 00:11:33.303 rmmod nvme_fabrics 00:11:33.303 rmmod nvme_keyring 
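For context on the bdevperf step traced above: the harness pipes a generated JSON config into bdevperf over /dev/fd/62. A hand-written equivalent could look roughly like the sketch below. The attach-controller parameters and the bdevperf flags are the ones printed in the trace; the outer "subsystems"/"config" wrapper, the /tmp path, and running from the SPDK repo root are assumptions, not necessarily what gen_nvmf_target_json literally emits.

# Hedged sketch: attach the same NVMe-oF/TCP controller and run the same workload shape.
cat > /tmp/bdevperf_nvme.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON
# 64 outstanding 64 KiB I/Os, verify workload, 1 second: the same flags as the run above.
./build/examples/bdevperf --json /tmp/bdevperf_nvme.json -q 64 -o 65536 -w verify -t 1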
00:11:33.303 17:12:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:33.303 17:12:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:11:33.303 17:12:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:11:33.303 17:12:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 67194 ']' 00:11:33.303 17:12:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 67194 00:11:33.303 17:12:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 67194 ']' 00:11:33.303 17:12:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 67194 00:11:33.303 17:12:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:11:33.303 17:12:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:33.304 17:12:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67194 00:11:33.572 17:12:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:11:33.572 17:12:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:11:33.572 killing process with pid 67194 00:11:33.572 17:12:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67194' 00:11:33.572 17:12:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 67194 00:11:33.572 17:12:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 67194 00:11:33.572 [2024-07-24 17:12:19.776151] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:11:33.572 17:12:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:33.572 17:12:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:33.572 17:12:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:33.572 17:12:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:33.572 17:12:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:33.572 17:12:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:33.572 17:12:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:33.572 17:12:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:33.832 17:12:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:33.832 17:12:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:11:33.832 00:11:33.832 real 0m6.131s 00:11:33.832 user 0m24.057s 00:11:33.832 sys 0m1.385s 00:11:33.832 ************************************ 00:11:33.832 END TEST nvmf_host_management 00:11:33.832 ************************************ 00:11:33.832 17:12:19 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:33.832 17:12:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:33.832 17:12:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:11:33.832 17:12:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:33.832 17:12:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:33.832 17:12:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:33.832 ************************************ 00:11:33.832 START TEST nvmf_lvol 00:11:33.832 ************************************ 00:11:33.833 17:12:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:11:33.833 * Looking for test storage... 00:11:33.833 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:33.833 17:12:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:33.833 17:12:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:11:33.833 17:12:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:33.833 17:12:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:33.833 17:12:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:33.833 17:12:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:33.833 17:12:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:33.833 17:12:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:33.833 17:12:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:33.833 17:12:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:33.833 17:12:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:33.833 17:12:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:33.833 17:12:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:11:33.833 17:12:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:11:33.833 17:12:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:33.833 17:12:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:33.833 17:12:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:33.833 17:12:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:33.833 17:12:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:33.833 17:12:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:33.833 17:12:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:33.833 17:12:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:33.833 17:12:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.833 17:12:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.833 17:12:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.833 17:12:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:11:33.833 17:12:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.833 17:12:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:11:33.833 17:12:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:33.833 17:12:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:33.833 17:12:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:33.833 17:12:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:33.833 17:12:19 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:33.833 17:12:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:33.833 17:12:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:33.833 17:12:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:33.833 17:12:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:33.833 17:12:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:33.833 17:12:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:11:33.833 17:12:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:11:33.833 17:12:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:33.833 17:12:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:11:33.833 17:12:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:33.833 17:12:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:33.833 17:12:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:33.833 17:12:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:33.833 17:12:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:33.833 17:12:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:33.833 17:12:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:33.833 17:12:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:33.833 17:12:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:33.833 17:12:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:33.833 17:12:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:33.833 17:12:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:33.833 17:12:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:11:33.833 17:12:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:33.833 17:12:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:33.833 17:12:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:33.833 17:12:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:33.833 17:12:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:33.833 17:12:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:33.833 17:12:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:33.833 17:12:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:33.833 17:12:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
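For readers unfamiliar with the harness networking, the nvmf_veth_init trace that follows boils down to a small veth-plus-bridge topology: one initiator-side veth pair on the host and two target-side pairs whose far ends live in the nvmf_tgt_ns_spdk namespace, with all host-side peers enslaved to the nvmf_br bridge. A condensed sketch using the same names and addresses as the trace (several link-up commands, the FORWARD iptables rule, and error handling/cleanup are left out):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side stays on the host
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target side, 10.0.0.2
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2   # target side, 10.0.0.3
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                     # bridge the host-side peers together
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP to port 4420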
00:11:33.833 17:12:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:33.833 17:12:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:33.833 17:12:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:33.833 17:12:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:33.833 17:12:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:33.833 17:12:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:33.833 Cannot find device "nvmf_tgt_br" 00:11:33.833 17:12:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # true 00:11:33.833 17:12:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:33.833 Cannot find device "nvmf_tgt_br2" 00:11:33.833 17:12:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # true 00:11:33.833 17:12:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:33.833 17:12:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:33.833 Cannot find device "nvmf_tgt_br" 00:11:33.833 17:12:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # true 00:11:33.833 17:12:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:34.092 Cannot find device "nvmf_tgt_br2" 00:11:34.092 17:12:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # true 00:11:34.092 17:12:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:34.092 17:12:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:34.092 17:12:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:34.092 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:34.092 17:12:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:11:34.092 17:12:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:34.092 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:34.092 17:12:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:11:34.092 17:12:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:34.092 17:12:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:34.092 17:12:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:34.092 17:12:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:34.092 17:12:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:34.092 17:12:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:34.092 17:12:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:34.092 17:12:20 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:34.092 17:12:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:34.092 17:12:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:34.092 17:12:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:34.092 17:12:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:34.092 17:12:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:34.092 17:12:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:34.092 17:12:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:34.092 17:12:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:34.092 17:12:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:34.092 17:12:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:34.092 17:12:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:34.092 17:12:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:34.092 17:12:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:34.092 17:12:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:34.092 17:12:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:34.092 17:12:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:34.092 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:34.093 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:11:34.093 00:11:34.093 --- 10.0.0.2 ping statistics --- 00:11:34.093 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:34.093 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:11:34.093 17:12:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:34.093 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:34.093 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:11:34.093 00:11:34.093 --- 10.0.0.3 ping statistics --- 00:11:34.093 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:34.093 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:11:34.093 17:12:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:34.351 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:34.351 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:11:34.351 00:11:34.352 --- 10.0.0.1 ping statistics --- 00:11:34.352 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:34.352 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:11:34.352 17:12:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:34.352 17:12:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@433 -- # return 0 00:11:34.352 17:12:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:34.352 17:12:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:34.352 17:12:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:34.352 17:12:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:34.352 17:12:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:34.352 17:12:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:34.352 17:12:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:34.352 17:12:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:11:34.352 17:12:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:34.352 17:12:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:34.352 17:12:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:34.352 17:12:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=67532 00:11:34.352 17:12:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:11:34.352 17:12:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 67532 00:11:34.352 17:12:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 67532 ']' 00:11:34.352 17:12:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:34.352 17:12:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:34.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:34.352 17:12:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:34.352 17:12:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:34.352 17:12:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:34.352 [2024-07-24 17:12:20.406554] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
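The nvmfappstart step above launches nvmf_tgt inside that namespace (core mask 0x7, tracepoint group mask 0xFFFF) and then waits for its RPC socket. A rough stand-alone equivalent is sketched below, with the caveat that the polling loop is an assumption; the real waitforlisten helper in autotest_common.sh is more careful about timeouts and process liveness.

ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &
nvmfpid=$!   # pid of the backgrounded command
# Poll the default RPC socket until the target answers.
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
  sleep 0.5
done
echo "nvmf_tgt is up (wrapper pid $nvmfpid), RPC on /var/tmp/spdk.sock"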
00:11:34.352 [2024-07-24 17:12:20.406643] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:34.352 [2024-07-24 17:12:20.550058] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:34.611 [2024-07-24 17:12:20.680282] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:34.611 [2024-07-24 17:12:20.680574] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:34.611 [2024-07-24 17:12:20.680879] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:34.611 [2024-07-24 17:12:20.681070] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:34.611 [2024-07-24 17:12:20.681258] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:34.611 [2024-07-24 17:12:20.681431] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:34.611 [2024-07-24 17:12:20.681558] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:34.611 [2024-07-24 17:12:20.681569] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:35.176 17:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:35.176 17:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:11:35.176 17:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:35.176 17:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:35.176 17:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:35.434 17:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:35.434 17:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:35.692 [2024-07-24 17:12:21.694870] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:35.692 17:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:35.950 17:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:11:35.950 17:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:36.208 17:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:11:36.208 17:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:11:36.467 17:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:11:36.725 17:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=37546107-43a9-434c-a0fa-4372c8728305 00:11:36.725 17:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 
37546107-43a9-434c-a0fa-4372c8728305 lvol 20 00:11:36.983 17:12:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=429ab84b-799b-48fc-a6fa-523d797ac623 00:11:36.983 17:12:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:37.242 17:12:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 429ab84b-799b-48fc-a6fa-523d797ac623 00:11:37.500 17:12:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:37.758 [2024-07-24 17:12:23.857908] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:37.758 17:12:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:38.016 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:11:38.016 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=67673 00:11:38.016 17:12:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:11:38.951 17:12:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 429ab84b-799b-48fc-a6fa-523d797ac623 MY_SNAPSHOT 00:11:39.520 17:12:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=ec3ac5c8-ffb4-4587-a897-f86aa6dc9782 00:11:39.520 17:12:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 429ab84b-799b-48fc-a6fa-523d797ac623 30 00:11:39.520 17:12:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone ec3ac5c8-ffb4-4587-a897-f86aa6dc9782 MY_CLONE 00:11:40.123 17:12:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=07e7fca3-6d01-46a6-b8d8-522930f2abaf 00:11:40.123 17:12:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 07e7fca3-6d01-46a6-b8d8-522930f2abaf 00:11:40.690 17:12:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 67673 00:11:48.829 Initializing NVMe Controllers 00:11:48.829 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:11:48.829 Controller IO queue size 128, less than required. 00:11:48.829 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:48.829 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:11:48.829 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:11:48.829 Initialization complete. Launching workers. 
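Stepping back from the perf output for a moment: the lvol-backed subsystem being exercised here was provisioned with a short rpc.py sequence, condensed below from the trace above. UUID handling is simplified to plain command substitution (which is what the script's xtrace shows), rpc.py is assumed to talk to the default /var/tmp/spdk.sock, and the discovery listener and cleanup steps are omitted.

rpc=./scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512                                   # -> Malloc0
$rpc bdev_malloc_create 64 512                                   # -> Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'   # stripe the two malloc bdevs
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)                   # lvstore UUID
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)                  # initial size 20 (LVOL_BDEV_INIT_SIZE)
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# Snapshot/clone path taken while spdk_nvme_perf writes to the namespace:
snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
$rpc bdev_lvol_resize "$lvol" 30                                 # grow to LVOL_BDEV_FINAL_SIZE
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
$rpc bdev_lvol_inflate "$clone"                                  # decouple the clone from its snapshot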
00:11:48.829 ======================================================== 00:11:48.829 Latency(us) 00:11:48.829 Device Information : IOPS MiB/s Average min max 00:11:48.829 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10447.90 40.81 12262.06 2320.04 132640.57 00:11:48.829 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10382.00 40.55 12340.37 2758.69 63724.73 00:11:48.829 ======================================================== 00:11:48.829 Total : 20829.90 81.37 12301.09 2320.04 132640.57 00:11:48.829 00:11:48.829 17:12:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:48.829 17:12:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 429ab84b-799b-48fc-a6fa-523d797ac623 00:11:48.829 17:12:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 37546107-43a9-434c-a0fa-4372c8728305 00:11:49.088 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:11:49.088 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:11:49.088 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:11:49.088 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:49.088 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:11:49.088 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:49.088 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:11:49.088 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:49.088 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:49.088 rmmod nvme_tcp 00:11:49.088 rmmod nvme_fabrics 00:11:49.088 rmmod nvme_keyring 00:11:49.088 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:49.088 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:11:49.088 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:11:49.088 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 67532 ']' 00:11:49.088 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 67532 00:11:49.088 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 67532 ']' 00:11:49.088 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 67532 00:11:49.088 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:11:49.088 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:49.088 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67532 00:11:49.347 killing process with pid 67532 00:11:49.347 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:49.347 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:49.347 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 67532' 00:11:49.347 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 67532 00:11:49.347 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 67532 00:11:49.606 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:49.606 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:49.606 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:49.606 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:49.606 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:49.606 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:49.606 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:49.606 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:49.606 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:49.606 ************************************ 00:11:49.606 END TEST nvmf_lvol 00:11:49.606 ************************************ 00:11:49.606 00:11:49.606 real 0m15.760s 00:11:49.606 user 1m5.824s 00:11:49.606 sys 0m3.890s 00:11:49.606 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:49.606 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:49.606 17:12:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:11:49.606 17:12:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:49.606 17:12:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:49.606 17:12:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:49.606 ************************************ 00:11:49.606 START TEST nvmf_lvs_grow 00:11:49.606 ************************************ 00:11:49.606 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:11:49.606 * Looking for test storage... 
00:11:49.606 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:49.606 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:49.606 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:11:49.606 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:49.606 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:49.606 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:49.606 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:49.606 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:49.606 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:49.606 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:49.606 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:49.606 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:49.606 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:49.606 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:11:49.606 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:11:49.606 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:49.606 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:49.606 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:49.606 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:49.606 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:49.606 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:49.606 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:49.606 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:49.606 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.606 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.606 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.606 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:11:49.606 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.606 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:11:49.606 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:49.606 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:49.606 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:49.606 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:49.606 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:49.606 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:49.606 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:49.606 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:49.606 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:49.606 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:49.606 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:11:49.606 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 
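(The nvmf/common.sh block sourced above defines the initiator-side identity and addressing used throughout these tests: NVMF_PORT=4420, NVME_HOSTNQN/NVME_HOSTID, and NVME_CONNECT='nvme connect' together with the NVME_HOST array. As a hedged illustration only, an initiator would combine them roughly as below once a target listener is up; this exact nvme-cli call is not part of this particular trace, and the cnode0 subsystem NQN and 10.0.0.2 address are taken from the surrounding target setup.)

# Illustrative initiator-side connect using the variables defined above:
nvme connect -t tcp -a 10.0.0.2 -s 4420 \
  -n nqn.2016-06.io.spdk:cnode0 \
  --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b \
  --hostid=6e739a91-f4c2-49d2-9c71-a65eb7d5315b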
00:11:49.606 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:49.606 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:49.606 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:49.607 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:49.607 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:49.607 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:49.607 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:49.607 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:49.607 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:49.607 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:49.607 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:49.607 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:11:49.607 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:49.607 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:49.607 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:49.607 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:49.607 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:49.607 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:49.607 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:49.607 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:49.607 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:49.607 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:49.607 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:49.607 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:49.607 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:49.607 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:49.607 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:49.607 Cannot find device "nvmf_tgt_br" 00:11:49.607 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # true 00:11:49.607 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:49.607 Cannot find device "nvmf_tgt_br2" 00:11:49.607 17:12:35 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # true 00:11:49.607 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:49.607 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:49.865 Cannot find device "nvmf_tgt_br" 00:11:49.865 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # true 00:11:49.865 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:49.865 Cannot find device "nvmf_tgt_br2" 00:11:49.866 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # true 00:11:49.866 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:49.866 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:49.866 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:49.866 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:49.866 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:11:49.866 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:49.866 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:49.866 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:11:49.866 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:49.866 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:49.866 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:49.866 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:49.866 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:49.866 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:49.866 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:49.866 17:12:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:49.866 17:12:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:49.866 17:12:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:49.866 17:12:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:49.866 17:12:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:49.866 17:12:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:49.866 17:12:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:49.866 17:12:36 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:49.866 17:12:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:49.866 17:12:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:49.866 17:12:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:49.866 17:12:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:49.866 17:12:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:49.866 17:12:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:49.866 17:12:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:49.866 17:12:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:49.866 17:12:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:50.125 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:50.125 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:11:50.125 00:11:50.125 --- 10.0.0.2 ping statistics --- 00:11:50.125 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:50.125 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:11:50.125 17:12:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:50.125 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:50.125 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:11:50.125 00:11:50.125 --- 10.0.0.3 ping statistics --- 00:11:50.125 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:50.125 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:11:50.125 17:12:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:50.125 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:50.125 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:11:50.125 00:11:50.125 --- 10.0.0.1 ping statistics --- 00:11:50.125 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:50.125 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:11:50.125 17:12:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:50.125 17:12:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@433 -- # return 0 00:11:50.125 17:12:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:50.125 17:12:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:50.125 17:12:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:50.125 17:12:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:50.125 17:12:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:50.125 17:12:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:50.125 17:12:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:50.125 17:12:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:11:50.125 17:12:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:50.125 17:12:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:50.125 17:12:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:50.125 17:12:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=68043 00:11:50.125 17:12:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:50.125 17:12:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 68043 00:11:50.125 17:12:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 68043 ']' 00:11:50.125 17:12:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:50.125 17:12:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:50.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:50.125 17:12:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:50.125 17:12:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:50.125 17:12:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:50.125 [2024-07-24 17:12:36.196714] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
00:11:50.125 [2024-07-24 17:12:36.196815] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:50.125 [2024-07-24 17:12:36.338377] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:50.384 [2024-07-24 17:12:36.454679] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:50.384 [2024-07-24 17:12:36.454739] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:50.384 [2024-07-24 17:12:36.454750] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:50.384 [2024-07-24 17:12:36.454759] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:50.384 [2024-07-24 17:12:36.454766] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:50.384 [2024-07-24 17:12:36.454799] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:51.319 17:12:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:51.319 17:12:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:11:51.319 17:12:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:51.319 17:12:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:51.319 17:12:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:51.319 17:12:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:51.319 17:12:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:51.319 [2024-07-24 17:12:37.485390] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:51.319 17:12:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:11:51.319 17:12:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:11:51.319 17:12:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:51.319 17:12:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:51.319 ************************************ 00:11:51.319 START TEST lvs_grow_clean 00:11:51.319 ************************************ 00:11:51.319 17:12:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:11:51.319 17:12:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:11:51.319 17:12:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:11:51.319 17:12:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:11:51.319 17:12:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:11:51.319 17:12:37 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:11:51.319 17:12:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:11:51.319 17:12:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:11:51.319 17:12:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:11:51.319 17:12:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:51.886 17:12:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:11:51.886 17:12:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:11:51.886 17:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=70fd8593-0829-41c5-b082-bb18ee9e6f76 00:11:51.886 17:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 70fd8593-0829-41c5-b082-bb18ee9e6f76 00:11:51.886 17:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:11:52.452 17:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:11:52.452 17:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:11:52.452 17:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 70fd8593-0829-41c5-b082-bb18ee9e6f76 lvol 150 00:11:52.710 17:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=5f49f4d6-981c-424a-b210-edeb317e8f6d 00:11:52.710 17:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:11:52.710 17:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:11:52.710 [2024-07-24 17:12:38.941870] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:11:52.710 [2024-07-24 17:12:38.941949] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:11:52.968 true 00:11:52.968 17:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 70fd8593-0829-41c5-b082-bb18ee9e6f76 00:11:52.968 17:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:11:53.226 17:12:39 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:11:53.226 17:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:53.485 17:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5f49f4d6-981c-424a-b210-edeb317e8f6d 00:11:53.743 17:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:54.002 [2024-07-24 17:12:39.990536] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:54.002 17:12:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:54.269 17:12:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=68211 00:11:54.269 17:12:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:11:54.269 17:12:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:54.269 17:12:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 68211 /var/tmp/bdevperf.sock 00:11:54.269 17:12:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 68211 ']' 00:11:54.269 17:12:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:54.269 17:12:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:54.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:54.269 17:12:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:54.269 17:12:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:54.269 17:12:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:11:54.269 [2024-07-24 17:12:40.309135] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
00:11:54.269 [2024-07-24 17:12:40.309251] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68211 ] 00:11:54.269 [2024-07-24 17:12:40.447165] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:54.552 [2024-07-24 17:12:40.589823] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:55.117 17:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:55.117 17:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:11:55.117 17:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:11:55.684 Nvme0n1 00:11:55.684 17:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:11:55.684 [ 00:11:55.684 { 00:11:55.684 "aliases": [ 00:11:55.684 "5f49f4d6-981c-424a-b210-edeb317e8f6d" 00:11:55.684 ], 00:11:55.684 "assigned_rate_limits": { 00:11:55.684 "r_mbytes_per_sec": 0, 00:11:55.684 "rw_ios_per_sec": 0, 00:11:55.684 "rw_mbytes_per_sec": 0, 00:11:55.684 "w_mbytes_per_sec": 0 00:11:55.684 }, 00:11:55.684 "block_size": 4096, 00:11:55.684 "claimed": false, 00:11:55.684 "driver_specific": { 00:11:55.684 "mp_policy": "active_passive", 00:11:55.684 "nvme": [ 00:11:55.684 { 00:11:55.684 "ctrlr_data": { 00:11:55.684 "ana_reporting": false, 00:11:55.684 "cntlid": 1, 00:11:55.684 "firmware_revision": "24.09", 00:11:55.684 "model_number": "SPDK bdev Controller", 00:11:55.684 "multi_ctrlr": true, 00:11:55.684 "oacs": { 00:11:55.684 "firmware": 0, 00:11:55.684 "format": 0, 00:11:55.684 "ns_manage": 0, 00:11:55.684 "security": 0 00:11:55.684 }, 00:11:55.684 "serial_number": "SPDK0", 00:11:55.684 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:55.684 "vendor_id": "0x8086" 00:11:55.684 }, 00:11:55.684 "ns_data": { 00:11:55.684 "can_share": true, 00:11:55.684 "id": 1 00:11:55.684 }, 00:11:55.684 "trid": { 00:11:55.684 "adrfam": "IPv4", 00:11:55.684 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:55.684 "traddr": "10.0.0.2", 00:11:55.684 "trsvcid": "4420", 00:11:55.684 "trtype": "TCP" 00:11:55.684 }, 00:11:55.684 "vs": { 00:11:55.684 "nvme_version": "1.3" 00:11:55.684 } 00:11:55.684 } 00:11:55.684 ] 00:11:55.684 }, 00:11:55.684 "memory_domains": [ 00:11:55.684 { 00:11:55.684 "dma_device_id": "system", 00:11:55.684 "dma_device_type": 1 00:11:55.684 } 00:11:55.684 ], 00:11:55.684 "name": "Nvme0n1", 00:11:55.684 "num_blocks": 38912, 00:11:55.684 "product_name": "NVMe disk", 00:11:55.684 "supported_io_types": { 00:11:55.684 "abort": true, 00:11:55.684 "compare": true, 00:11:55.684 "compare_and_write": true, 00:11:55.684 "copy": true, 00:11:55.684 "flush": true, 00:11:55.684 "get_zone_info": false, 00:11:55.684 "nvme_admin": true, 00:11:55.684 "nvme_io": true, 00:11:55.684 "nvme_io_md": false, 00:11:55.684 "nvme_iov_md": false, 00:11:55.684 "read": true, 00:11:55.684 "reset": true, 00:11:55.684 "seek_data": false, 00:11:55.684 "seek_hole": false, 00:11:55.684 "unmap": true, 00:11:55.684 "write": true, 00:11:55.684 
"write_zeroes": true, 00:11:55.684 "zcopy": false, 00:11:55.684 "zone_append": false, 00:11:55.684 "zone_management": false 00:11:55.684 }, 00:11:55.684 "uuid": "5f49f4d6-981c-424a-b210-edeb317e8f6d", 00:11:55.684 "zoned": false 00:11:55.684 } 00:11:55.684 ] 00:11:55.684 17:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:55.684 17:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=68259 00:11:55.684 17:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:11:55.944 Running I/O for 10 seconds... 00:11:56.877 Latency(us) 00:11:56.877 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:56.877 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:56.877 Nvme0n1 : 1.00 8342.00 32.59 0.00 0.00 0.00 0.00 0.00 00:11:56.877 =================================================================================================================== 00:11:56.877 Total : 8342.00 32.59 0.00 0.00 0.00 0.00 0.00 00:11:56.877 00:11:57.850 17:12:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 70fd8593-0829-41c5-b082-bb18ee9e6f76 00:11:57.850 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:57.850 Nvme0n1 : 2.00 8360.00 32.66 0.00 0.00 0.00 0.00 0.00 00:11:57.850 =================================================================================================================== 00:11:57.850 Total : 8360.00 32.66 0.00 0.00 0.00 0.00 0.00 00:11:57.850 00:11:58.108 true 00:11:58.108 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:11:58.108 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 70fd8593-0829-41c5-b082-bb18ee9e6f76 00:11:58.364 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:11:58.364 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:11:58.364 17:12:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 68259 00:11:58.928 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:58.928 Nvme0n1 : 3.00 8281.00 32.35 0.00 0.00 0.00 0.00 0.00 00:11:58.928 =================================================================================================================== 00:11:58.928 Total : 8281.00 32.35 0.00 0.00 0.00 0.00 0.00 00:11:58.928 00:11:59.858 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:59.858 Nvme0n1 : 4.00 8237.75 32.18 0.00 0.00 0.00 0.00 0.00 00:11:59.858 =================================================================================================================== 00:11:59.858 Total : 8237.75 32.18 0.00 0.00 0.00 0.00 0.00 00:11:59.858 00:12:00.791 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:00.791 Nvme0n1 : 5.00 8237.20 32.18 0.00 0.00 0.00 0.00 0.00 00:12:00.791 
=================================================================================================================== 00:12:00.791 Total : 8237.20 32.18 0.00 0.00 0.00 0.00 0.00 00:12:00.791 00:12:02.164 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:02.164 Nvme0n1 : 6.00 8206.83 32.06 0.00 0.00 0.00 0.00 0.00 00:12:02.164 =================================================================================================================== 00:12:02.164 Total : 8206.83 32.06 0.00 0.00 0.00 0.00 0.00 00:12:02.164 00:12:03.098 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:03.098 Nvme0n1 : 7.00 8184.14 31.97 0.00 0.00 0.00 0.00 0.00 00:12:03.098 =================================================================================================================== 00:12:03.098 Total : 8184.14 31.97 0.00 0.00 0.00 0.00 0.00 00:12:03.098 00:12:04.032 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:04.032 Nvme0n1 : 8.00 8157.25 31.86 0.00 0.00 0.00 0.00 0.00 00:12:04.032 =================================================================================================================== 00:12:04.032 Total : 8157.25 31.86 0.00 0.00 0.00 0.00 0.00 00:12:04.032 00:12:04.966 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:04.966 Nvme0n1 : 9.00 8125.11 31.74 0.00 0.00 0.00 0.00 0.00 00:12:04.966 =================================================================================================================== 00:12:04.966 Total : 8125.11 31.74 0.00 0.00 0.00 0.00 0.00 00:12:04.966 00:12:05.901 00:12:05.901 Latency(us) 00:12:05.901 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:05.901 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:05.901 Nvme0n1 : 10.00 8078.97 31.56 0.00 0.00 15839.62 7387.69 35746.91 00:12:05.901 =================================================================================================================== 00:12:05.901 Total : 8078.97 31.56 0.00 0.00 15839.62 7387.69 35746.91 00:12:05.901 0 00:12:05.901 17:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 68211 00:12:05.901 17:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 68211 ']' 00:12:05.901 17:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 68211 00:12:05.901 17:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:12:05.901 17:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:05.901 17:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68211 00:12:05.901 17:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:12:05.901 killing process with pid 68211 00:12:05.901 17:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:12:05.901 17:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 68211' 00:12:05.901 Received shutdown signal, test time was about 10.000000 seconds 00:12:05.901 00:12:05.901 Latency(us) 00:12:05.901 Device Information : 
runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:05.901 =================================================================================================================== 00:12:05.901 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:05.901 17:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 68211 00:12:05.901 17:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 68211 00:12:06.159 17:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:06.417 17:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:06.674 17:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 70fd8593-0829-41c5-b082-bb18ee9e6f76 00:12:06.674 17:12:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:12:06.932 17:12:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:12:06.932 17:12:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:12:06.932 17:12:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:07.190 [2024-07-24 17:12:53.365699] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:12:07.190 17:12:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 70fd8593-0829-41c5-b082-bb18ee9e6f76 00:12:07.190 17:12:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:12:07.190 17:12:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 70fd8593-0829-41c5-b082-bb18ee9e6f76 00:12:07.190 17:12:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:07.190 17:12:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:07.190 17:12:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:07.190 17:12:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:07.190 17:12:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:07.190 17:12:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:07.190 17:12:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 
00:12:07.190 17:12:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:12:07.190 17:12:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 70fd8593-0829-41c5-b082-bb18ee9e6f76 00:12:07.447 2024/07/24 17:12:53 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:70fd8593-0829-41c5-b082-bb18ee9e6f76], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:12:07.447 request: 00:12:07.447 { 00:12:07.447 "method": "bdev_lvol_get_lvstores", 00:12:07.447 "params": { 00:12:07.447 "uuid": "70fd8593-0829-41c5-b082-bb18ee9e6f76" 00:12:07.447 } 00:12:07.447 } 00:12:07.447 Got JSON-RPC error response 00:12:07.447 GoRPCClient: error on JSON-RPC call 00:12:07.704 17:12:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:12:07.704 17:12:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:07.704 17:12:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:07.704 17:12:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:07.705 17:12:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:07.705 aio_bdev 00:12:07.962 17:12:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 5f49f4d6-981c-424a-b210-edeb317e8f6d 00:12:07.962 17:12:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=5f49f4d6-981c-424a-b210-edeb317e8f6d 00:12:07.962 17:12:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:07.962 17:12:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:12:07.962 17:12:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:07.962 17:12:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:07.962 17:12:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:08.220 17:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5f49f4d6-981c-424a-b210-edeb317e8f6d -t 2000 00:12:08.478 [ 00:12:08.478 { 00:12:08.478 "aliases": [ 00:12:08.478 "lvs/lvol" 00:12:08.478 ], 00:12:08.478 "assigned_rate_limits": { 00:12:08.478 "r_mbytes_per_sec": 0, 00:12:08.478 "rw_ios_per_sec": 0, 00:12:08.478 "rw_mbytes_per_sec": 0, 00:12:08.478 "w_mbytes_per_sec": 0 00:12:08.478 }, 00:12:08.478 "block_size": 4096, 00:12:08.478 "claimed": false, 00:12:08.478 "driver_specific": { 00:12:08.478 "lvol": { 00:12:08.478 "base_bdev": "aio_bdev", 00:12:08.478 "clone": false, 00:12:08.478 "esnap_clone": false, 00:12:08.478 "lvol_store_uuid": "70fd8593-0829-41c5-b082-bb18ee9e6f76", 00:12:08.478 
"num_allocated_clusters": 38, 00:12:08.478 "snapshot": false, 00:12:08.478 "thin_provision": false 00:12:08.478 } 00:12:08.478 }, 00:12:08.478 "name": "5f49f4d6-981c-424a-b210-edeb317e8f6d", 00:12:08.478 "num_blocks": 38912, 00:12:08.478 "product_name": "Logical Volume", 00:12:08.478 "supported_io_types": { 00:12:08.478 "abort": false, 00:12:08.478 "compare": false, 00:12:08.478 "compare_and_write": false, 00:12:08.478 "copy": false, 00:12:08.478 "flush": false, 00:12:08.478 "get_zone_info": false, 00:12:08.478 "nvme_admin": false, 00:12:08.478 "nvme_io": false, 00:12:08.478 "nvme_io_md": false, 00:12:08.478 "nvme_iov_md": false, 00:12:08.478 "read": true, 00:12:08.478 "reset": true, 00:12:08.478 "seek_data": true, 00:12:08.478 "seek_hole": true, 00:12:08.478 "unmap": true, 00:12:08.478 "write": true, 00:12:08.478 "write_zeroes": true, 00:12:08.478 "zcopy": false, 00:12:08.478 "zone_append": false, 00:12:08.478 "zone_management": false 00:12:08.478 }, 00:12:08.478 "uuid": "5f49f4d6-981c-424a-b210-edeb317e8f6d", 00:12:08.478 "zoned": false 00:12:08.478 } 00:12:08.478 ] 00:12:08.478 17:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:12:08.478 17:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 70fd8593-0829-41c5-b082-bb18ee9e6f76 00:12:08.478 17:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:12:08.478 17:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:12:08.736 17:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:12:08.736 17:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 70fd8593-0829-41c5-b082-bb18ee9e6f76 00:12:08.994 17:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:12:08.994 17:12:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 5f49f4d6-981c-424a-b210-edeb317e8f6d 00:12:08.995 17:12:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 70fd8593-0829-41c5-b082-bb18ee9e6f76 00:12:09.561 17:12:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:09.819 17:12:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:12:10.077 ************************************ 00:12:10.077 END TEST lvs_grow_clean 00:12:10.077 ************************************ 00:12:10.077 00:12:10.077 real 0m18.667s 00:12:10.077 user 0m17.916s 00:12:10.077 sys 0m2.227s 00:12:10.077 17:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:10.077 17:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:12:10.077 17:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:12:10.077 17:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:10.077 17:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:10.077 17:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:10.077 ************************************ 00:12:10.077 START TEST lvs_grow_dirty 00:12:10.077 ************************************ 00:12:10.077 17:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:12:10.077 17:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:12:10.077 17:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:12:10.077 17:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:12:10.077 17:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:12:10.077 17:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:12:10.077 17:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:12:10.077 17:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:12:10.077 17:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:12:10.077 17:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:10.335 17:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:12:10.335 17:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:12:10.901 17:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=4fd22bf7-374b-41ce-8ce0-6bca9876d25e 00:12:10.901 17:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4fd22bf7-374b-41ce-8ce0-6bca9876d25e 00:12:10.901 17:12:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:12:10.901 17:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:12:10.901 17:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:12:10.901 17:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 4fd22bf7-374b-41ce-8ce0-6bca9876d25e lvol 150 00:12:11.159 17:12:57 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=89e7d45b-e43a-4cba-a282-f4b4144d0421 00:12:11.159 17:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:12:11.159 17:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:12:11.417 [2024-07-24 17:12:57.638035] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:12:11.417 [2024-07-24 17:12:57.638170] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:12:11.417 true 00:12:11.675 17:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:12:11.675 17:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4fd22bf7-374b-41ce-8ce0-6bca9876d25e 00:12:11.675 17:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:12:11.675 17:12:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:11.934 17:12:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 89e7d45b-e43a-4cba-a282-f4b4144d0421 00:12:12.192 17:12:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:12.451 [2024-07-24 17:12:58.578576] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:12.451 17:12:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:12.710 17:12:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=68659 00:12:12.710 17:12:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:12:12.710 17:12:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:12.710 17:12:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 68659 /var/tmp/bdevperf.sock 00:12:12.710 17:12:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 68659 ']' 00:12:12.710 17:12:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:12.710 17:12:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:12:12.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:12.710 17:12:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:12.710 17:12:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:12.710 17:12:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:12.710 [2024-07-24 17:12:58.879744] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:12:12.710 [2024-07-24 17:12:58.879837] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68659 ] 00:12:12.969 [2024-07-24 17:12:59.017707] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:12.969 [2024-07-24 17:12:59.149164] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:13.963 17:12:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:13.963 17:12:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:12:13.963 17:12:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:12:13.963 Nvme0n1 00:12:13.963 17:13:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:12:14.222 [ 00:12:14.222 { 00:12:14.222 "aliases": [ 00:12:14.222 "89e7d45b-e43a-4cba-a282-f4b4144d0421" 00:12:14.222 ], 00:12:14.222 "assigned_rate_limits": { 00:12:14.222 "r_mbytes_per_sec": 0, 00:12:14.222 "rw_ios_per_sec": 0, 00:12:14.222 "rw_mbytes_per_sec": 0, 00:12:14.222 "w_mbytes_per_sec": 0 00:12:14.222 }, 00:12:14.222 "block_size": 4096, 00:12:14.222 "claimed": false, 00:12:14.222 "driver_specific": { 00:12:14.222 "mp_policy": "active_passive", 00:12:14.222 "nvme": [ 00:12:14.222 { 00:12:14.222 "ctrlr_data": { 00:12:14.222 "ana_reporting": false, 00:12:14.222 "cntlid": 1, 00:12:14.222 "firmware_revision": "24.09", 00:12:14.222 "model_number": "SPDK bdev Controller", 00:12:14.222 "multi_ctrlr": true, 00:12:14.222 "oacs": { 00:12:14.222 "firmware": 0, 00:12:14.222 "format": 0, 00:12:14.222 "ns_manage": 0, 00:12:14.222 "security": 0 00:12:14.222 }, 00:12:14.222 "serial_number": "SPDK0", 00:12:14.222 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:14.222 "vendor_id": "0x8086" 00:12:14.222 }, 00:12:14.222 "ns_data": { 00:12:14.222 "can_share": true, 00:12:14.222 "id": 1 00:12:14.222 }, 00:12:14.222 "trid": { 00:12:14.222 "adrfam": "IPv4", 00:12:14.222 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:14.222 "traddr": "10.0.0.2", 00:12:14.222 "trsvcid": "4420", 00:12:14.222 "trtype": "TCP" 00:12:14.222 }, 00:12:14.222 "vs": { 00:12:14.222 "nvme_version": "1.3" 00:12:14.222 } 00:12:14.222 } 00:12:14.222 ] 00:12:14.222 }, 00:12:14.222 "memory_domains": [ 00:12:14.222 { 00:12:14.222 "dma_device_id": "system", 00:12:14.222 
"dma_device_type": 1 00:12:14.222 } 00:12:14.222 ], 00:12:14.222 "name": "Nvme0n1", 00:12:14.222 "num_blocks": 38912, 00:12:14.222 "product_name": "NVMe disk", 00:12:14.222 "supported_io_types": { 00:12:14.222 "abort": true, 00:12:14.222 "compare": true, 00:12:14.222 "compare_and_write": true, 00:12:14.222 "copy": true, 00:12:14.222 "flush": true, 00:12:14.222 "get_zone_info": false, 00:12:14.222 "nvme_admin": true, 00:12:14.222 "nvme_io": true, 00:12:14.222 "nvme_io_md": false, 00:12:14.222 "nvme_iov_md": false, 00:12:14.222 "read": true, 00:12:14.222 "reset": true, 00:12:14.222 "seek_data": false, 00:12:14.222 "seek_hole": false, 00:12:14.222 "unmap": true, 00:12:14.222 "write": true, 00:12:14.222 "write_zeroes": true, 00:12:14.222 "zcopy": false, 00:12:14.222 "zone_append": false, 00:12:14.222 "zone_management": false 00:12:14.222 }, 00:12:14.222 "uuid": "89e7d45b-e43a-4cba-a282-f4b4144d0421", 00:12:14.222 "zoned": false 00:12:14.222 } 00:12:14.222 ] 00:12:14.222 17:13:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=68712 00:12:14.222 17:13:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:14.222 17:13:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:12:14.481 Running I/O for 10 seconds... 00:12:15.414 Latency(us) 00:12:15.414 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:15.414 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:15.414 Nvme0n1 : 1.00 8403.00 32.82 0.00 0.00 0.00 0.00 0.00 00:12:15.414 =================================================================================================================== 00:12:15.414 Total : 8403.00 32.82 0.00 0.00 0.00 0.00 0.00 00:12:15.414 00:12:16.348 17:13:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 4fd22bf7-374b-41ce-8ce0-6bca9876d25e 00:12:16.348 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:16.348 Nvme0n1 : 2.00 8356.00 32.64 0.00 0.00 0.00 0.00 0.00 00:12:16.349 =================================================================================================================== 00:12:16.349 Total : 8356.00 32.64 0.00 0.00 0.00 0.00 0.00 00:12:16.349 00:12:16.647 true 00:12:16.647 17:13:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4fd22bf7-374b-41ce-8ce0-6bca9876d25e 00:12:16.647 17:13:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:12:16.905 17:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:12:16.905 17:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:12:16.905 17:13:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 68712 00:12:17.471 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:17.471 Nvme0n1 : 3.00 8381.00 32.74 0.00 0.00 0.00 0.00 0.00 00:12:17.471 
=================================================================================================================== 00:12:17.471 Total : 8381.00 32.74 0.00 0.00 0.00 0.00 0.00 00:12:17.471 00:12:18.406 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:18.406 Nvme0n1 : 4.00 8368.75 32.69 0.00 0.00 0.00 0.00 0.00 00:12:18.406 =================================================================================================================== 00:12:18.406 Total : 8368.75 32.69 0.00 0.00 0.00 0.00 0.00 00:12:18.406 00:12:19.445 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:19.445 Nvme0n1 : 5.00 8331.60 32.55 0.00 0.00 0.00 0.00 0.00 00:12:19.445 =================================================================================================================== 00:12:19.445 Total : 8331.60 32.55 0.00 0.00 0.00 0.00 0.00 00:12:19.445 00:12:20.380 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:20.380 Nvme0n1 : 6.00 8289.00 32.38 0.00 0.00 0.00 0.00 0.00 00:12:20.380 =================================================================================================================== 00:12:20.380 Total : 8289.00 32.38 0.00 0.00 0.00 0.00 0.00 00:12:20.380 00:12:21.316 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:21.316 Nvme0n1 : 7.00 8180.57 31.96 0.00 0.00 0.00 0.00 0.00 00:12:21.316 =================================================================================================================== 00:12:21.316 Total : 8180.57 31.96 0.00 0.00 0.00 0.00 0.00 00:12:21.316 00:12:22.689 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:22.689 Nvme0n1 : 8.00 7941.00 31.02 0.00 0.00 0.00 0.00 0.00 00:12:22.689 =================================================================================================================== 00:12:22.689 Total : 7941.00 31.02 0.00 0.00 0.00 0.00 0.00 00:12:22.689 00:12:23.256 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:23.256 Nvme0n1 : 9.00 7887.44 30.81 0.00 0.00 0.00 0.00 0.00 00:12:23.256 =================================================================================================================== 00:12:23.256 Total : 7887.44 30.81 0.00 0.00 0.00 0.00 0.00 00:12:23.256 00:12:24.634 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:24.634 Nvme0n1 : 10.00 7888.30 30.81 0.00 0.00 0.00 0.00 0.00 00:12:24.634 =================================================================================================================== 00:12:24.634 Total : 7888.30 30.81 0.00 0.00 0.00 0.00 0.00 00:12:24.634 00:12:24.634 00:12:24.634 Latency(us) 00:12:24.634 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:24.634 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:24.634 Nvme0n1 : 10.02 7890.99 30.82 0.00 0.00 16209.26 7238.75 66250.94 00:12:24.634 =================================================================================================================== 00:12:24.634 Total : 7890.99 30.82 0.00 0.00 16209.26 7238.75 66250.94 00:12:24.634 0 00:12:24.634 17:13:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 68659 00:12:24.634 17:13:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 68659 ']' 00:12:24.634 17:13:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@954 -- # kill -0 68659 00:12:24.634 17:13:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:12:24.634 17:13:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:24.634 17:13:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68659 00:12:24.634 17:13:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:12:24.634 17:13:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:12:24.634 killing process with pid 68659 00:12:24.634 17:13:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 68659' 00:12:24.634 17:13:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 68659 00:12:24.634 Received shutdown signal, test time was about 10.000000 seconds 00:12:24.634 00:12:24.634 Latency(us) 00:12:24.634 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:24.634 =================================================================================================================== 00:12:24.634 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:24.634 17:13:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 68659 00:12:24.634 17:13:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:24.892 17:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:25.152 17:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4fd22bf7-374b-41ce-8ce0-6bca9876d25e 00:12:25.152 17:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:12:25.411 17:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:12:25.411 17:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:12:25.411 17:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 68043 00:12:25.411 17:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 68043 00:12:25.411 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 68043 Killed "${NVMF_APP[@]}" "$@" 00:12:25.411 17:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:12:25.411 17:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:12:25.411 17:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:25.411 17:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:25.411 17:13:11 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:25.411 17:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=68876 00:12:25.411 17:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 68876 00:12:25.411 17:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 68876 ']' 00:12:25.411 17:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:25.411 17:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:25.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:25.411 17:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:25.411 17:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:25.411 17:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:25.411 17:13:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:25.669 [2024-07-24 17:13:11.653593] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:12:25.669 [2024-07-24 17:13:11.653720] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:25.669 [2024-07-24 17:13:11.797019] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:25.927 [2024-07-24 17:13:11.916782] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:25.927 [2024-07-24 17:13:11.916853] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:25.927 [2024-07-24 17:13:11.916881] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:25.927 [2024-07-24 17:13:11.916889] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:25.927 [2024-07-24 17:13:11.916896] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
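The dirty-grow scenario driven above boils down to a short RPC sequence; a rough sketch follows, using the AIO path, lvstore UUID and pid from this particular run (all three are run-specific), with the bdevperf write phase omitted:

  # grow the 200M backing AIO file to 400M, then let the bdev and lvstore pick it up
  truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 4fd22bf7-374b-41ce-8ce0-6bca9876d25e
  # with 4 MiB clusters, total_data_clusters moves from 49 to 99 in this run
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4fd22bf7-374b-41ce-8ce0-6bca9876d25e | jq -r '.[0].total_data_clusters'
  # leave the lvstore dirty: no clean shutdown, just kill the first target outright
  kill -9 68043
  # after restarting nvmf_tgt, re-creating the AIO bdev triggers blobstore recovery,
  # and the recovered lvstore reports the expected free_clusters (61 here)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4fd22bf7-374b-41ce-8ce0-6bca9876d25e | jq -r '.[0].free_clusters'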
00:12:25.927 [2024-07-24 17:13:11.916926] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:26.493 17:13:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:26.494 17:13:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:12:26.494 17:13:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:26.494 17:13:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:26.494 17:13:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:26.494 17:13:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:26.494 17:13:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:26.751 [2024-07-24 17:13:12.961637] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:12:26.751 [2024-07-24 17:13:12.961906] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:12:26.751 [2024-07-24 17:13:12.962152] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:12:27.010 17:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:12:27.010 17:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 89e7d45b-e43a-4cba-a282-f4b4144d0421 00:12:27.010 17:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=89e7d45b-e43a-4cba-a282-f4b4144d0421 00:12:27.010 17:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:27.010 17:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:12:27.010 17:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:27.010 17:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:27.010 17:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:27.269 17:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 89e7d45b-e43a-4cba-a282-f4b4144d0421 -t 2000 00:12:27.527 [ 00:12:27.527 { 00:12:27.527 "aliases": [ 00:12:27.527 "lvs/lvol" 00:12:27.527 ], 00:12:27.527 "assigned_rate_limits": { 00:12:27.527 "r_mbytes_per_sec": 0, 00:12:27.527 "rw_ios_per_sec": 0, 00:12:27.527 "rw_mbytes_per_sec": 0, 00:12:27.527 "w_mbytes_per_sec": 0 00:12:27.527 }, 00:12:27.527 "block_size": 4096, 00:12:27.527 "claimed": false, 00:12:27.527 "driver_specific": { 00:12:27.527 "lvol": { 00:12:27.527 "base_bdev": "aio_bdev", 00:12:27.527 "clone": false, 00:12:27.527 "esnap_clone": false, 00:12:27.527 "lvol_store_uuid": "4fd22bf7-374b-41ce-8ce0-6bca9876d25e", 00:12:27.527 "num_allocated_clusters": 38, 00:12:27.527 "snapshot": false, 00:12:27.527 
"thin_provision": false 00:12:27.527 } 00:12:27.527 }, 00:12:27.527 "name": "89e7d45b-e43a-4cba-a282-f4b4144d0421", 00:12:27.527 "num_blocks": 38912, 00:12:27.527 "product_name": "Logical Volume", 00:12:27.527 "supported_io_types": { 00:12:27.527 "abort": false, 00:12:27.527 "compare": false, 00:12:27.527 "compare_and_write": false, 00:12:27.527 "copy": false, 00:12:27.527 "flush": false, 00:12:27.527 "get_zone_info": false, 00:12:27.527 "nvme_admin": false, 00:12:27.527 "nvme_io": false, 00:12:27.527 "nvme_io_md": false, 00:12:27.527 "nvme_iov_md": false, 00:12:27.527 "read": true, 00:12:27.527 "reset": true, 00:12:27.527 "seek_data": true, 00:12:27.527 "seek_hole": true, 00:12:27.527 "unmap": true, 00:12:27.527 "write": true, 00:12:27.527 "write_zeroes": true, 00:12:27.527 "zcopy": false, 00:12:27.527 "zone_append": false, 00:12:27.527 "zone_management": false 00:12:27.527 }, 00:12:27.527 "uuid": "89e7d45b-e43a-4cba-a282-f4b4144d0421", 00:12:27.527 "zoned": false 00:12:27.527 } 00:12:27.527 ] 00:12:27.527 17:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:12:27.527 17:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4fd22bf7-374b-41ce-8ce0-6bca9876d25e 00:12:27.527 17:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:12:27.785 17:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:12:27.785 17:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4fd22bf7-374b-41ce-8ce0-6bca9876d25e 00:12:27.785 17:13:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:12:28.043 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:12:28.043 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:28.301 [2024-07-24 17:13:14.350948] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:12:28.301 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4fd22bf7-374b-41ce-8ce0-6bca9876d25e 00:12:28.301 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:12:28.301 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4fd22bf7-374b-41ce-8ce0-6bca9876d25e 00:12:28.301 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:28.301 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:28.301 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:28.301 17:13:14 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:28.301 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:28.301 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:28.301 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:28.301 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:12:28.301 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4fd22bf7-374b-41ce-8ce0-6bca9876d25e 00:12:28.559 2024/07/24 17:13:14 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:4fd22bf7-374b-41ce-8ce0-6bca9876d25e], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:12:28.559 request: 00:12:28.559 { 00:12:28.559 "method": "bdev_lvol_get_lvstores", 00:12:28.559 "params": { 00:12:28.559 "uuid": "4fd22bf7-374b-41ce-8ce0-6bca9876d25e" 00:12:28.559 } 00:12:28.559 } 00:12:28.559 Got JSON-RPC error response 00:12:28.559 GoRPCClient: error on JSON-RPC call 00:12:28.559 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:12:28.559 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:28.559 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:28.559 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:28.559 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:28.817 aio_bdev 00:12:28.817 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 89e7d45b-e43a-4cba-a282-f4b4144d0421 00:12:28.817 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=89e7d45b-e43a-4cba-a282-f4b4144d0421 00:12:28.817 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:28.817 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:12:28.817 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:28.817 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:28.817 17:13:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:29.075 17:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 89e7d45b-e43a-4cba-a282-f4b4144d0421 -t 2000 00:12:29.333 [ 
00:12:29.333 { 00:12:29.333 "aliases": [ 00:12:29.333 "lvs/lvol" 00:12:29.333 ], 00:12:29.333 "assigned_rate_limits": { 00:12:29.333 "r_mbytes_per_sec": 0, 00:12:29.333 "rw_ios_per_sec": 0, 00:12:29.333 "rw_mbytes_per_sec": 0, 00:12:29.333 "w_mbytes_per_sec": 0 00:12:29.333 }, 00:12:29.333 "block_size": 4096, 00:12:29.333 "claimed": false, 00:12:29.333 "driver_specific": { 00:12:29.333 "lvol": { 00:12:29.333 "base_bdev": "aio_bdev", 00:12:29.333 "clone": false, 00:12:29.333 "esnap_clone": false, 00:12:29.333 "lvol_store_uuid": "4fd22bf7-374b-41ce-8ce0-6bca9876d25e", 00:12:29.333 "num_allocated_clusters": 38, 00:12:29.333 "snapshot": false, 00:12:29.333 "thin_provision": false 00:12:29.333 } 00:12:29.333 }, 00:12:29.333 "name": "89e7d45b-e43a-4cba-a282-f4b4144d0421", 00:12:29.333 "num_blocks": 38912, 00:12:29.333 "product_name": "Logical Volume", 00:12:29.333 "supported_io_types": { 00:12:29.333 "abort": false, 00:12:29.333 "compare": false, 00:12:29.333 "compare_and_write": false, 00:12:29.333 "copy": false, 00:12:29.333 "flush": false, 00:12:29.334 "get_zone_info": false, 00:12:29.334 "nvme_admin": false, 00:12:29.334 "nvme_io": false, 00:12:29.334 "nvme_io_md": false, 00:12:29.334 "nvme_iov_md": false, 00:12:29.334 "read": true, 00:12:29.334 "reset": true, 00:12:29.334 "seek_data": true, 00:12:29.334 "seek_hole": true, 00:12:29.334 "unmap": true, 00:12:29.334 "write": true, 00:12:29.334 "write_zeroes": true, 00:12:29.334 "zcopy": false, 00:12:29.334 "zone_append": false, 00:12:29.334 "zone_management": false 00:12:29.334 }, 00:12:29.334 "uuid": "89e7d45b-e43a-4cba-a282-f4b4144d0421", 00:12:29.334 "zoned": false 00:12:29.334 } 00:12:29.334 ] 00:12:29.334 17:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:12:29.334 17:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4fd22bf7-374b-41ce-8ce0-6bca9876d25e 00:12:29.334 17:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:12:29.624 17:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:12:29.624 17:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4fd22bf7-374b-41ce-8ce0-6bca9876d25e 00:12:29.624 17:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:12:29.882 17:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:12:29.882 17:13:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 89e7d45b-e43a-4cba-a282-f4b4144d0421 00:12:30.140 17:13:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4fd22bf7-374b-41ce-8ce0-6bca9876d25e 00:12:30.397 17:13:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:30.655 17:13:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:12:30.914 ************************************ 00:12:30.914 END TEST lvs_grow_dirty 00:12:30.914 ************************************ 00:12:30.914 00:12:30.914 real 0m20.761s 00:12:30.914 user 0m43.340s 00:12:30.914 sys 0m8.203s 00:12:30.914 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:30.914 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:30.914 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:12:30.914 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:12:30.914 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:12:30.914 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:12:30.914 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:12:30.914 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:12:30.914 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:12:30.914 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:12:30.914 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:12:30.914 nvmf_trace.0 00:12:30.914 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:12:30.914 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:12:30.914 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:30.914 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:12:31.173 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:31.173 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:12:31.173 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:31.173 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:31.173 rmmod nvme_tcp 00:12:31.173 rmmod nvme_fabrics 00:12:31.173 rmmod nvme_keyring 00:12:31.173 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:31.173 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:12:31.173 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:12:31.173 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 68876 ']' 00:12:31.173 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 68876 00:12:31.173 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 68876 ']' 00:12:31.173 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 68876 00:12:31.173 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:12:31.173 17:13:17 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:31.173 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68876 00:12:31.173 killing process with pid 68876 00:12:31.173 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:31.173 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:31.173 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 68876' 00:12:31.173 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 68876 00:12:31.173 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 68876 00:12:31.432 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:31.432 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:31.432 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:31.432 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:31.432 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:31.432 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:31.432 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:31.432 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:31.432 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:31.432 00:12:31.432 real 0m41.968s 00:12:31.432 user 1m7.849s 00:12:31.432 sys 0m11.203s 00:12:31.432 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:31.432 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:31.432 ************************************ 00:12:31.432 END TEST nvmf_lvs_grow 00:12:31.432 ************************************ 00:12:31.691 17:13:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:12:31.691 17:13:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:31.691 17:13:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:31.691 17:13:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:31.691 ************************************ 00:12:31.691 START TEST nvmf_bdev_io_wait 00:12:31.691 ************************************ 00:12:31.691 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:12:31.691 * Looking for test storage... 
00:12:31.691 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:31.691 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:31.691 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:12:31.691 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:31.691 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:31.691 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:31.691 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:31.691 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:31.692 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:31.692 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:31.692 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:31.692 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:31.692 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:31.692 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:12:31.692 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:12:31.692 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:31.692 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:31.692 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:31.692 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:31.692 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:31.692 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:31.692 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:31.692 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:31.692 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.692 17:13:17 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.692 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.692 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:12:31.692 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.692 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:12:31.692 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:31.692 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:31.692 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:31.692 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:31.692 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:31.692 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:31.692 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:31.692 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:31.692 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:31.692 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:31.692 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:12:31.692 17:13:17 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:31.692 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:31.692 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:31.692 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:31.692 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:31.692 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:31.692 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:31.692 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:31.692 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:31.692 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:31.692 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:31.692 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:31.692 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:31.692 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:31.692 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:31.692 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:31.692 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:31.692 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:31.692 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:31.692 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:31.692 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:31.692 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:31.692 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:31.692 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:31.692 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:31.692 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:31.692 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:31.692 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:31.692 Cannot find device "nvmf_tgt_br" 00:12:31.692 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # true 00:12:31.692 
17:13:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:31.692 Cannot find device "nvmf_tgt_br2" 00:12:31.693 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # true 00:12:31.693 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:31.693 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:31.693 Cannot find device "nvmf_tgt_br" 00:12:31.693 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # true 00:12:31.693 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:31.693 Cannot find device "nvmf_tgt_br2" 00:12:31.693 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # true 00:12:31.693 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:31.951 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:31.951 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:31.951 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:31.951 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:12:31.951 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:31.951 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:31.951 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:12:31.951 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:31.951 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:31.951 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:31.951 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:31.951 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:31.951 17:13:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:31.951 17:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:31.951 17:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:31.951 17:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:31.951 17:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:31.951 17:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:31.951 17:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:31.951 17:13:18 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:31.951 17:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:31.951 17:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:31.951 17:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:31.951 17:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:31.951 17:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:31.951 17:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:31.951 17:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:31.951 17:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:31.951 17:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:31.951 17:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:31.951 17:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:31.951 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:31.951 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:12:31.951 00:12:31.951 --- 10.0.0.2 ping statistics --- 00:12:31.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:31.951 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:12:31.951 17:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:31.951 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:31.951 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:12:31.951 00:12:31.951 --- 10.0.0.3 ping statistics --- 00:12:31.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:31.951 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:12:31.951 17:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:31.951 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:31.951 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:12:31.951 00:12:31.951 --- 10.0.0.1 ping statistics --- 00:12:31.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:31.951 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:12:31.951 17:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:31.951 17:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@433 -- # return 0 00:12:31.951 17:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:31.951 17:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:31.951 17:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:31.951 17:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:31.951 17:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:31.951 17:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:31.951 17:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:32.243 17:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:12:32.243 17:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:32.243 17:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:32.243 17:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:32.243 17:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=69290 00:12:32.243 17:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 69290 00:12:32.243 17:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:12:32.243 17:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 69290 ']' 00:12:32.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:32.243 17:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:32.243 17:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:32.243 17:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:32.243 17:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:32.243 17:13:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:32.243 [2024-07-24 17:13:18.250054] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
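The ping checks above fall out of the veth/bridge topology that nvmf_veth_init builds; abbreviated (the second target interface and the individual 'ip link set ... up' steps are left out), the wiring is roughly:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # initiator-side path to the target address, and the reverse path from inside the namespace
  ping -c 1 10.0.0.2
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1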
00:12:32.243 [2024-07-24 17:13:18.250166] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:32.243 [2024-07-24 17:13:18.391477] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:32.500 [2024-07-24 17:13:18.521682] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:32.500 [2024-07-24 17:13:18.521940] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:32.500 [2024-07-24 17:13:18.522162] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:32.500 [2024-07-24 17:13:18.522323] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:32.500 [2024-07-24 17:13:18.522385] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:32.500 [2024-07-24 17:13:18.522807] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:32.500 [2024-07-24 17:13:18.522888] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:32.500 [2024-07-24 17:13:18.522965] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:32.500 [2024-07-24 17:13:18.522970] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:33.065 17:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:33.065 17:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:12:33.065 17:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:33.065 17:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:33.065 17:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:33.324 17:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:33.324 17:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:12:33.324 17:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.324 17:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:33.324 17:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.324 17:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:12:33.324 17:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.324 17:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:33.324 17:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.324 17:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:33.324 17:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.324 17:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@10 -- # set +x 00:12:33.324 [2024-07-24 17:13:19.395601] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:33.324 17:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.324 17:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:33.324 17:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.324 17:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:33.324 Malloc0 00:12:33.324 17:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.324 17:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:33.324 17:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.324 17:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:33.324 17:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.324 17:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:33.324 17:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.324 17:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:33.324 17:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.324 17:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:33.324 17:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.324 17:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:33.324 [2024-07-24 17:13:19.465019] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:33.324 17:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.324 17:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=69349 00:12:33.324 17:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:12:33.324 17:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:12:33.324 17:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:33.324 17:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:33.324 17:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=69351 00:12:33.324 17:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:33.324 17:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:33.324 { 00:12:33.324 "params": { 
00:12:33.324 "name": "Nvme$subsystem", 00:12:33.324 "trtype": "$TEST_TRANSPORT", 00:12:33.324 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:33.324 "adrfam": "ipv4", 00:12:33.324 "trsvcid": "$NVMF_PORT", 00:12:33.324 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:33.324 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:33.324 "hdgst": ${hdgst:-false}, 00:12:33.324 "ddgst": ${ddgst:-false} 00:12:33.324 }, 00:12:33.324 "method": "bdev_nvme_attach_controller" 00:12:33.324 } 00:12:33.324 EOF 00:12:33.324 )") 00:12:33.324 17:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:12:33.324 17:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:12:33.324 17:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=69353 00:12:33.324 17:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:33.324 17:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:33.324 17:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:33.324 17:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:12:33.324 17:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:33.324 { 00:12:33.324 "params": { 00:12:33.324 "name": "Nvme$subsystem", 00:12:33.324 "trtype": "$TEST_TRANSPORT", 00:12:33.324 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:33.324 "adrfam": "ipv4", 00:12:33.324 "trsvcid": "$NVMF_PORT", 00:12:33.324 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:33.324 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:33.324 "hdgst": ${hdgst:-false}, 00:12:33.324 "ddgst": ${ddgst:-false} 00:12:33.324 }, 00:12:33.324 "method": "bdev_nvme_attach_controller" 00:12:33.324 } 00:12:33.324 EOF 00:12:33.324 )") 00:12:33.324 17:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:33.324 17:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:33.324 17:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=69357 00:12:33.324 17:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:12:33.324 17:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:33.324 17:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:33.324 17:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:33.324 17:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:33.324 { 00:12:33.324 "params": { 00:12:33.324 "name": "Nvme$subsystem", 00:12:33.324 "trtype": "$TEST_TRANSPORT", 00:12:33.324 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:33.324 "adrfam": "ipv4", 00:12:33.324 "trsvcid": "$NVMF_PORT", 00:12:33.324 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:33.324 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:33.324 "hdgst": ${hdgst:-false}, 00:12:33.324 "ddgst": ${ddgst:-false} 00:12:33.324 }, 
00:12:33.324 "method": "bdev_nvme_attach_controller" 00:12:33.324 } 00:12:33.324 EOF 00:12:33.324 )") 00:12:33.324 17:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:33.324 17:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:12:33.324 17:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:12:33.324 17:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:12:33.324 17:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:12:33.324 17:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:33.324 17:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:33.324 "params": { 00:12:33.324 "name": "Nvme1", 00:12:33.324 "trtype": "tcp", 00:12:33.324 "traddr": "10.0.0.2", 00:12:33.324 "adrfam": "ipv4", 00:12:33.324 "trsvcid": "4420", 00:12:33.324 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:33.325 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:33.325 "hdgst": false, 00:12:33.325 "ddgst": false 00:12:33.325 }, 00:12:33.325 "method": "bdev_nvme_attach_controller" 00:12:33.325 }' 00:12:33.325 17:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:33.325 17:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:33.325 "params": { 00:12:33.325 "name": "Nvme1", 00:12:33.325 "trtype": "tcp", 00:12:33.325 "traddr": "10.0.0.2", 00:12:33.325 "adrfam": "ipv4", 00:12:33.325 "trsvcid": "4420", 00:12:33.325 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:33.325 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:33.325 "hdgst": false, 00:12:33.325 "ddgst": false 00:12:33.325 }, 00:12:33.325 "method": "bdev_nvme_attach_controller" 00:12:33.325 }' 00:12:33.325 17:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:12:33.325 17:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:33.325 17:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:33.325 17:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:33.325 17:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:33.325 { 00:12:33.325 "params": { 00:12:33.325 "name": "Nvme$subsystem", 00:12:33.325 "trtype": "$TEST_TRANSPORT", 00:12:33.325 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:33.325 "adrfam": "ipv4", 00:12:33.325 "trsvcid": "$NVMF_PORT", 00:12:33.325 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:33.325 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:33.325 "hdgst": ${hdgst:-false}, 00:12:33.325 "ddgst": ${ddgst:-false} 00:12:33.325 }, 00:12:33.325 "method": "bdev_nvme_attach_controller" 00:12:33.325 } 00:12:33.325 EOF 00:12:33.325 )") 00:12:33.325 17:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
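The rpc_cmd calls a few lines above are what actually stand the target up for this test; rpc_cmd in autotest_common.sh forwards to SPDK's scripts/rpc.py, so against the default /var/tmp/spdk.sock they are roughly equivalent to:

  scripts/rpc.py bdev_set_options -p 5 -c 1     # deliberately tiny bdev_io pool/cache so I/O has to wait
  scripts/rpc.py framework_start_init           # needed because nvmf_tgt was started with --wait-for-rpc
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420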
00:12:33.325 17:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:33.325 17:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:33.325 17:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:33.325 "params": { 00:12:33.325 "name": "Nvme1", 00:12:33.325 "trtype": "tcp", 00:12:33.325 "traddr": "10.0.0.2", 00:12:33.325 "adrfam": "ipv4", 00:12:33.325 "trsvcid": "4420", 00:12:33.325 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:33.325 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:33.325 "hdgst": false, 00:12:33.325 "ddgst": false 00:12:33.325 }, 00:12:33.325 "method": "bdev_nvme_attach_controller" 00:12:33.325 }' 00:12:33.325 17:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:12:33.325 17:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:33.325 17:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:33.325 "params": { 00:12:33.325 "name": "Nvme1", 00:12:33.325 "trtype": "tcp", 00:12:33.325 "traddr": "10.0.0.2", 00:12:33.325 "adrfam": "ipv4", 00:12:33.325 "trsvcid": "4420", 00:12:33.325 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:33.325 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:33.325 "hdgst": false, 00:12:33.325 "ddgst": false 00:12:33.325 }, 00:12:33.325 "method": "bdev_nvme_attach_controller" 00:12:33.325 }' 00:12:33.325 [2024-07-24 17:13:19.527173] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:12:33.325 [2024-07-24 17:13:19.527532] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:12:33.325 17:13:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 69349 00:12:33.325 [2024-07-24 17:13:19.548130] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:12:33.325 [2024-07-24 17:13:19.548392] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:12:33.584 [2024-07-24 17:13:19.557926] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:12:33.584 [2024-07-24 17:13:19.558253] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:12:33.584 [2024-07-24 17:13:19.558436] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
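Each of the four bdevperf processes launched here receives the same generated attach-controller JSON over /dev/fd/63 (bash process substitution of gen_nvmf_target_json) plus its own core mask, instance id and workload, so write, read, flush and unmap all run against cnode1 concurrently for one second each. Schematically, with the flags as traced:

  BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  $BDEVPERF -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
  $BDEVPERF -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read  -t 1 -s 256 & READ_PID=$!
  $BDEVPERF -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
  $BDEVPERF -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!
  wait $WRITE_PID $READ_PID $FLUSH_PID $UNMAP_PID   # the per-workload latency tables below come from these
  # gen_nvmf_target_json expands to the bdev_nvme_attach_controller block printed above:
  # Nvme1 over tcp to 10.0.0.2:4420, subsystem nqn.2016-06.io.spdk:cnode1, digests off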
00:12:33.584 [2024-07-24 17:13:19.558513] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:12:33.584 [2024-07-24 17:13:19.740430] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:33.842 [2024-07-24 17:13:19.823245] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:33.842 [2024-07-24 17:13:19.842822] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:12:33.842 [2024-07-24 17:13:19.890872] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:33.842 [2024-07-24 17:13:19.924846] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:12:33.842 [2024-07-24 17:13:19.987996] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:33.842 Running I/O for 1 seconds... 00:12:33.842 [2024-07-24 17:13:20.018339] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:12:33.842 Running I/O for 1 seconds... 00:12:34.099 [2024-07-24 17:13:20.078547] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:12:34.099 Running I/O for 1 seconds... 00:12:34.099 Running I/O for 1 seconds... 00:12:35.031 00:12:35.031 Latency(us) 00:12:35.031 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:35.031 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:12:35.031 Nvme1n1 : 1.02 7458.22 29.13 0.00 0.00 16914.44 6255.71 34078.72 00:12:35.031 =================================================================================================================== 00:12:35.031 Total : 7458.22 29.13 0.00 0.00 16914.44 6255.71 34078.72 00:12:35.031 00:12:35.031 Latency(us) 00:12:35.031 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:35.031 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:12:35.031 Nvme1n1 : 1.00 195017.99 761.79 0.00 0.00 653.56 292.31 778.24 00:12:35.031 =================================================================================================================== 00:12:35.031 Total : 195017.99 761.79 0.00 0.00 653.56 292.31 778.24 00:12:35.031 00:12:35.031 Latency(us) 00:12:35.031 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:35.031 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:12:35.031 Nvme1n1 : 1.01 8004.30 31.27 0.00 0.00 15892.70 10604.92 26333.56 00:12:35.031 =================================================================================================================== 00:12:35.031 Total : 8004.30 31.27 0.00 0.00 15892.70 10604.92 26333.56 00:12:35.031 00:12:35.031 Latency(us) 00:12:35.031 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:35.031 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:12:35.031 Nvme1n1 : 1.01 6953.50 27.16 0.00 0.00 18347.21 5808.87 42657.98 00:12:35.031 =================================================================================================================== 00:12:35.031 Total : 6953.50 27.16 0.00 0.00 18347.21 5808.87 42657.98 00:12:35.303 17:13:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 69351 00:12:35.303 17:13:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 69353 00:12:35.303 17:13:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@40 -- # wait 69357 00:12:35.303 17:13:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:35.303 17:13:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.303 17:13:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:35.594 17:13:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.594 17:13:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:12:35.594 17:13:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:12:35.594 17:13:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:35.594 17:13:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:12:35.594 17:13:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:35.594 17:13:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:12:35.594 17:13:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:35.594 17:13:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:35.594 rmmod nvme_tcp 00:12:35.594 rmmod nvme_fabrics 00:12:35.594 rmmod nvme_keyring 00:12:35.594 17:13:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:35.594 17:13:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:12:35.594 17:13:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:12:35.594 17:13:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 69290 ']' 00:12:35.594 17:13:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 69290 00:12:35.594 17:13:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 69290 ']' 00:12:35.594 17:13:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 69290 00:12:35.594 17:13:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:12:35.594 17:13:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:35.594 17:13:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69290 00:12:35.594 killing process with pid 69290 00:12:35.594 17:13:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:35.594 17:13:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:35.594 17:13:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69290' 00:12:35.594 17:13:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 69290 00:12:35.594 17:13:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 69290 00:12:35.853 17:13:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:35.853 17:13:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ 
tcp == \t\c\p ]] 00:12:35.853 17:13:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:35.853 17:13:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:35.853 17:13:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:35.853 17:13:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:35.853 17:13:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:35.853 17:13:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:35.853 17:13:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:35.853 ************************************ 00:12:35.853 END TEST nvmf_bdev_io_wait 00:12:35.853 ************************************ 00:12:35.853 00:12:35.853 real 0m4.191s 00:12:35.853 user 0m18.610s 00:12:35.853 sys 0m1.924s 00:12:35.853 17:13:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:35.853 17:13:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:35.853 17:13:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:12:35.853 17:13:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:35.853 17:13:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:35.853 17:13:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:35.853 ************************************ 00:12:35.853 START TEST nvmf_queue_depth 00:12:35.853 ************************************ 00:12:35.853 17:13:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:12:35.853 * Looking for test storage... 
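Before the queue_depth run rebuilds the same topology, note what nvmftestfini just did for the previous test: unload the host-side NVMe modules, kill the target (pid 69290), drop the namespace and flush the initiator address; the leftover bridge and veth devices are cleared by the next test's own init. Roughly:

  modprobe -v -r nvme-tcp       # -v shows the cascaded rmmod of nvme_tcp / nvme_fabrics / nvme_keyring
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"       # killprocess 69290, then wait 69290
  ip netns delete nvmf_tgt_ns_spdk         # remove_spdk_ns (its internals are xtrace'd away in the log)
  ip -4 addr flush nvmf_init_if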
00:12:35.853 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:35.853 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:35.853 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:12:35.853 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:35.853 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:35.853 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:35.853 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:35.853 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:35.853 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:35.853 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:35.853 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:35.853 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:35.853 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:35.853 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:12:35.853 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:12:35.853 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:35.853 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:35.853 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:35.853 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:35.853 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:35.853 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:35.853 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:35.853 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:35.853 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.853 17:13:22 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.853 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.853 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:12:35.853 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.853 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:12:35.853 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:35.853 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:35.853 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:35.853 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:35.853 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:35.853 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:35.854 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:35.854 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:35.854 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:12:35.854 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:12:35.854 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:35.854 17:13:22 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:12:35.854 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:35.854 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:35.854 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:35.854 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:35.854 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:35.854 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:35.854 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:35.854 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:35.854 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:35.854 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:35.854 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:35.854 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:35.854 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:35.854 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:35.854 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:35.854 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:35.854 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:35.854 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:35.854 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:35.854 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:35.854 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:35.854 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:35.854 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:35.854 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:35.854 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:35.854 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:35.854 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:35.854 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:35.854 Cannot find device "nvmf_tgt_br" 00:12:35.854 17:13:22 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # true 00:12:35.854 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:36.111 Cannot find device "nvmf_tgt_br2" 00:12:36.112 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # true 00:12:36.112 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:36.112 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:36.112 Cannot find device "nvmf_tgt_br" 00:12:36.112 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # true 00:12:36.112 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:36.112 Cannot find device "nvmf_tgt_br2" 00:12:36.112 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # true 00:12:36.112 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:36.112 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:36.112 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:36.112 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:36.112 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:12:36.112 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:36.112 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:36.112 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:12:36.112 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:36.112 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:36.112 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:36.112 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:36.112 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:36.112 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:36.112 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:36.112 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:36.112 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:36.112 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:36.112 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:36.112 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@185 
-- # ip link set nvmf_tgt_br up 00:12:36.112 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:36.112 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:36.112 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:36.112 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:36.112 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:36.112 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:36.112 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:36.112 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:36.112 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:36.112 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:36.112 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:36.370 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:36.370 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:36.370 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:12:36.370 00:12:36.370 --- 10.0.0.2 ping statistics --- 00:12:36.370 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:36.370 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:12:36.370 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:36.370 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:36.370 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:12:36.370 00:12:36.370 --- 10.0.0.3 ping statistics --- 00:12:36.370 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:36.370 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:12:36.370 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:36.370 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:36.370 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:12:36.370 00:12:36.370 --- 10.0.0.1 ping statistics --- 00:12:36.370 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:36.370 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:12:36.370 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:36.370 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@433 -- # return 0 00:12:36.370 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:36.370 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:36.370 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:36.370 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:36.370 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:36.370 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:36.370 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:36.370 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:12:36.370 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:36.370 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:36.370 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:36.370 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=69589 00:12:36.370 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 69589 00:12:36.371 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 69589 ']' 00:12:36.371 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:36.371 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:36.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:36.371 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:36.371 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:36.371 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:36.371 17:13:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:36.371 [2024-07-24 17:13:22.442622] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
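nvmfappstart here reduces to launching nvmf_tgt inside the namespace and waiting for its RPC socket; the moving parts visible in the trace are approximately:

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!                      # 69589 in this run
  waitforlisten "$nvmfpid"        # polls /var/tmp/spdk.sock until nvmf_tgt answers RPCs
  trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT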
00:12:36.371 [2024-07-24 17:13:22.442752] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:36.371 [2024-07-24 17:13:22.586278] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:36.628 [2024-07-24 17:13:22.713742] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:36.628 [2024-07-24 17:13:22.713806] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:36.628 [2024-07-24 17:13:22.713834] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:36.628 [2024-07-24 17:13:22.713844] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:36.628 [2024-07-24 17:13:22.713853] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:36.628 [2024-07-24 17:13:22.713892] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:37.562 17:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:37.562 17:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:12:37.562 17:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:37.562 17:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:37.562 17:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:37.562 17:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:37.562 17:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:37.562 17:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.562 17:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:37.562 [2024-07-24 17:13:23.528555] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:37.562 17:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.562 17:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:37.562 17:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.562 17:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:37.562 Malloc0 00:12:37.562 17:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.562 17:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:37.562 17:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.562 17:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:37.562 17:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:12:37.562 17:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:37.562 17:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.562 17:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:37.562 17:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.562 17:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:37.562 17:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.562 17:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:37.562 [2024-07-24 17:13:23.593427] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:37.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:37.562 17:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.562 17:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=69639 00:12:37.562 17:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:12:37.562 17:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:37.563 17:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 69639 /var/tmp/bdevperf.sock 00:12:37.563 17:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 69639 ']' 00:12:37.563 17:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:37.563 17:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:37.563 17:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:37.563 17:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:37.563 17:13:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:37.563 [2024-07-24 17:13:23.650795] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
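Unlike bdev_io_wait, the queue_depth test drives a single bdevperf over its own RPC socket instead of a --json config: the process is started idle with -z, the NVMe-oF controller is attached through that socket, and bdevperf.py then triggers the 10-second verify run at queue depth 1024. With paths relative to the SPDK repo, the three steps traced here reduce to roughly:

  build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  bdevperf_pid=$!                  # 69639 in this run
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests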
00:12:37.563 [2024-07-24 17:13:23.651225] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69639 ] 00:12:37.821 [2024-07-24 17:13:23.796058] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:37.821 [2024-07-24 17:13:23.923466] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:38.759 17:13:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:38.759 17:13:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:12:38.759 17:13:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:12:38.759 17:13:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.759 17:13:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:38.759 NVMe0n1 00:12:38.759 17:13:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.759 17:13:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:38.759 Running I/O for 10 seconds... 00:12:50.961 00:12:50.961 Latency(us) 00:12:50.961 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:50.961 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:12:50.961 Verification LBA range: start 0x0 length 0x4000 00:12:50.961 NVMe0n1 : 10.09 9200.11 35.94 0.00 0.00 110785.08 25856.93 110577.11 00:12:50.961 =================================================================================================================== 00:12:50.961 Total : 9200.11 35.94 0.00 0.00 110785.08 25856.93 110577.11 00:12:50.961 0 00:12:50.961 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 69639 00:12:50.961 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 69639 ']' 00:12:50.961 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 69639 00:12:50.961 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:12:50.961 17:13:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:50.961 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69639 00:12:50.961 killing process with pid 69639 00:12:50.961 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:50.961 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:50.961 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69639' 00:12:50.961 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 69639 00:12:50.961 Received shutdown signal, test time was about 10.000000 seconds 00:12:50.961 00:12:50.961 Latency(us) 
00:12:50.961 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:50.961 =================================================================================================================== 00:12:50.961 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:50.961 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 69639 00:12:50.961 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:50.961 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:12:50.961 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:50.961 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:12:50.961 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:50.961 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:12:50.961 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:50.961 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:50.961 rmmod nvme_tcp 00:12:50.961 rmmod nvme_fabrics 00:12:50.961 rmmod nvme_keyring 00:12:50.961 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:50.961 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:12:50.962 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:12:50.962 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 69589 ']' 00:12:50.962 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 69589 00:12:50.962 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 69589 ']' 00:12:50.962 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 69589 00:12:50.962 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:12:50.962 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:50.962 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69589 00:12:50.962 killing process with pid 69589 00:12:50.962 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:12:50.962 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:12:50.962 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69589' 00:12:50.962 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 69589 00:12:50.962 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 69589 00:12:50.962 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:50.962 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:50.962 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:50.962 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:50.962 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:50.962 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:50.962 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:50.962 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:50.962 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:50.962 ************************************ 00:12:50.962 END TEST nvmf_queue_depth 00:12:50.962 ************************************ 00:12:50.962 00:12:50.962 real 0m13.735s 00:12:50.962 user 0m23.904s 00:12:50.962 sys 0m2.027s 00:12:50.962 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:50.962 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:50.962 17:13:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:12:50.962 17:13:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:50.962 17:13:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:50.962 17:13:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:50.962 ************************************ 00:12:50.962 START TEST nvmf_target_multipath 00:12:50.962 ************************************ 00:12:50.962 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:12:50.962 * Looking for test storage... 
00:12:50.962 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:50.962 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:50.962 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:12:50.962 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:50.962 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:50.962 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:50.962 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:50.962 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:50.962 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:50.962 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:50.962 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:50.962 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:50.962 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:50.962 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:12:50.962 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:12:50.962 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:50.962 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:50.962 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:50.962 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:50.962 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:50.962 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:50.962 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:50.962 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:50.962 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.962 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.962 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.962 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:12:50.962 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.962 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:12:50.962 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:50.962 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:50.962 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:50.962 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:50.962 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:50.962 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:50.962 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:50.962 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:50.962 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:50.962 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:50.962 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:12:50.962 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:50.962 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:12:50.962 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:50.962 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:50.962 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:50.962 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:50.962 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:50.962 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:50.962 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:50.962 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:50.962 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:50.962 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:50.962 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:50.962 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:50.962 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:50.962 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:50.963 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:50.963 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:50.963 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:50.963 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:50.963 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:50.963 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:50.963 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:50.963 17:13:35 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:50.963 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:50.963 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:50.963 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:50.963 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:50.963 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:50.963 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:50.963 Cannot find device "nvmf_tgt_br" 00:12:50.963 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # true 00:12:50.963 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:50.963 Cannot find device "nvmf_tgt_br2" 00:12:50.963 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # true 00:12:50.963 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:50.963 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:50.963 Cannot find device "nvmf_tgt_br" 00:12:50.963 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # true 00:12:50.963 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:50.963 Cannot find device "nvmf_tgt_br2" 00:12:50.963 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # true 00:12:50.963 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:50.963 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:50.963 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:50.963 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:50.963 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:12:50.963 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:50.963 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:50.963 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:12:50.963 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:50.963 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:50.963 17:13:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:50.963 17:13:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name 
nvmf_tgt_br2 00:12:50.963 17:13:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:50.963 17:13:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:50.963 17:13:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:50.963 17:13:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:50.963 17:13:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:50.963 17:13:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:50.963 17:13:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:50.963 17:13:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:50.963 17:13:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:50.963 17:13:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:50.963 17:13:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:50.963 17:13:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:50.963 17:13:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:50.963 17:13:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:50.963 17:13:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:50.963 17:13:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:50.963 17:13:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:50.963 17:13:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:50.963 17:13:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:50.963 17:13:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:50.963 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:50.963 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:12:50.963 00:12:50.963 --- 10.0.0.2 ping statistics --- 00:12:50.963 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:50.963 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:12:50.963 17:13:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:50.963 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:12:50.963 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:12:50.963 00:12:50.963 --- 10.0.0.3 ping statistics --- 00:12:50.963 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:50.963 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:12:50.963 17:13:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:50.963 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:50.963 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:12:50.963 00:12:50.963 --- 10.0.0.1 ping statistics --- 00:12:50.963 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:50.963 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:12:50.963 17:13:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:50.963 17:13:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@433 -- # return 0 00:12:50.963 17:13:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:50.963 17:13:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:50.963 17:13:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:50.963 17:13:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:50.963 17:13:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:50.963 17:13:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:50.963 17:13:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:50.963 17:13:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:12:50.963 17:13:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:12:50.963 17:13:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:12:50.963 17:13:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:50.963 17:13:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:50.963 17:13:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:12:50.963 17:13:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@481 -- # nvmfpid=69972 00:12:50.963 17:13:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # waitforlisten 69972 00:12:50.963 17:13:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:50.963 17:13:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@831 -- # '[' -z 69972 ']' 00:12:50.963 17:13:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:50.963 17:13:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:50.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:50.963 17:13:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:50.963 17:13:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:50.963 17:13:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:12:50.963 [2024-07-24 17:13:36.266147] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:12:50.963 [2024-07-24 17:13:36.266270] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:50.963 [2024-07-24 17:13:36.401438] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:50.963 [2024-07-24 17:13:36.524087] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:50.964 [2024-07-24 17:13:36.524389] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:50.964 [2024-07-24 17:13:36.524517] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:50.964 [2024-07-24 17:13:36.524642] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:50.964 [2024-07-24 17:13:36.524681] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:50.964 [2024-07-24 17:13:36.524932] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:50.964 [2024-07-24 17:13:36.525057] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:50.964 [2024-07-24 17:13:36.525136] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:50.964 [2024-07-24 17:13:36.525136] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:51.237 17:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:51.237 17:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # return 0 00:12:51.237 17:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:51.237 17:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:51.237 17:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:12:51.237 17:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:51.237 17:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:51.504 [2024-07-24 17:13:37.495796] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:51.504 17:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:12:51.761 Malloc0 00:12:51.761 17:13:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 
00:12:52.019 17:13:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:52.278 17:13:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:52.536 [2024-07-24 17:13:38.521638] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:52.536 17:13:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:52.536 [2024-07-24 17:13:38.753867] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:52.795 17:13:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid=6e739a91-f4c2-49d2-9c71-a65eb7d5315b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:12:52.795 17:13:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid=6e739a91-f4c2-49d2-9c71-a65eb7d5315b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:12:53.054 17:13:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:12:53.054 17:13:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1198 -- # local i=0 00:12:53.054 17:13:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:53.054 17:13:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:53.054 17:13:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1205 -- # sleep 2 00:12:55.586 17:13:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:55.586 17:13:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:55.586 17:13:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:55.586 17:13:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:55.586 17:13:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:55.586 17:13:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # return 0 00:12:55.586 17:13:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:12:55.586 17:13:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:12:55.586 17:13:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:12:55.586 17:13:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ 
nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:55.586 17:13:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:12:55.586 17:13:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:12:55.586 17:13:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:12:55.586 17:13:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:12:55.586 17:13:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:12:55.586 17:13:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:12:55.586 17:13:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:12:55.586 17:13:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:12:55.586 17:13:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:12:55.586 17:13:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:12:55.586 17:13:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:12:55.586 17:13:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:12:55.586 17:13:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:12:55.586 17:13:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:12:55.586 17:13:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:12:55.586 17:13:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:12:55.586 17:13:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:12:55.586 17:13:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:12:55.586 17:13:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:12:55.586 17:13:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:12:55.586 17:13:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:12:55.586 17:13:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:12:55.586 17:13:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=70115 00:12:55.586 17:13:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:12:55.586 17:13:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:12:55.586 [global] 00:12:55.586 thread=1 00:12:55.586 invalidate=1 00:12:55.586 rw=randrw 00:12:55.586 time_based=1 00:12:55.586 runtime=6 00:12:55.586 ioengine=libaio 00:12:55.586 direct=1 00:12:55.586 bs=4096 00:12:55.586 iodepth=128 00:12:55.586 norandommap=0 00:12:55.586 numjobs=1 00:12:55.586 00:12:55.586 verify_dump=1 00:12:55.586 verify_backlog=512 00:12:55.586 verify_state_save=0 00:12:55.586 do_verify=1 00:12:55.586 verify=crc32c-intel 00:12:55.586 [job0] 00:12:55.586 filename=/dev/nvme0n1 00:12:55.586 Could not set queue depth (nvme0n1) 00:12:55.586 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:55.586 fio-3.35 00:12:55.586 Starting 1 thread 00:12:56.153 17:13:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:12:56.411 17:13:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:12:56.669 17:13:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:12:56.669 17:13:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:12:56.669 17:13:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:12:56.669 17:13:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:12:56.669 17:13:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:12:56.669 17:13:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:12:56.669 17:13:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:12:56.669 17:13:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:12:56.669 17:13:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:12:56.669 17:13:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:12:56.669 17:13:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:12:56.669 17:13:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:12:56.669 17:13:42 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:12:57.603 17:13:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:12:57.603 17:13:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:12:57.604 17:13:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:12:57.604 17:13:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:12:58.232 17:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:12:58.232 17:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:12:58.232 17:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:12:58.232 17:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:12:58.232 17:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:12:58.232 17:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:12:58.232 17:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:12:58.232 17:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:12:58.232 17:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:12:58.232 17:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:12:58.232 17:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:12:58.232 17:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:12:58.232 17:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:12:58.232 17:13:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:12:59.605 17:13:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:12:59.605 17:13:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:12:59.605 17:13:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:12:59.605 17:13:45 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 70115 00:13:01.507 00:13:01.507 job0: (groupid=0, jobs=1): err= 0: pid=70136: Wed Jul 24 17:13:47 2024 00:13:01.507 read: IOPS=10.8k, BW=42.3MiB/s (44.4MB/s)(254MiB/6006msec) 00:13:01.507 slat (usec): min=3, max=5364, avg=52.74, stdev=236.96 00:13:01.507 clat (usec): min=2027, max=16881, avg=7968.28, stdev=1174.66 00:13:01.507 lat (usec): min=2036, max=17570, avg=8021.01, stdev=1184.65 00:13:01.507 clat percentiles (usec): 00:13:01.507 | 1.00th=[ 4883], 5.00th=[ 6259], 10.00th=[ 6849], 20.00th=[ 7242], 00:13:01.507 | 30.00th=[ 7439], 40.00th=[ 7570], 50.00th=[ 7832], 60.00th=[ 8094], 00:13:01.507 | 70.00th=[ 8455], 80.00th=[ 8717], 90.00th=[ 9241], 95.00th=[ 9765], 00:13:01.507 | 99.00th=[11863], 99.50th=[12256], 99.90th=[13042], 99.95th=[13304], 00:13:01.507 | 99.99th=[14484] 00:13:01.507 bw ( KiB/s): min= 6475, max=30288, per=54.46%, avg=23587.18, stdev=7006.55, samples=11 00:13:01.507 iops : min= 1618, max= 7572, avg=5896.91, stdev=1751.61, samples=11 00:13:01.507 write: IOPS=6646, BW=26.0MiB/s (27.2MB/s)(138MiB/5333msec); 0 zone resets 00:13:01.507 slat (usec): min=14, max=1882, avg=63.43, stdev=165.80 00:13:01.507 clat (usec): min=2594, max=13159, avg=6870.97, stdev=900.26 00:13:01.507 lat (usec): min=2617, max=13185, avg=6934.40, stdev=902.62 00:13:01.507 clat percentiles (usec): 00:13:01.507 | 1.00th=[ 4015], 5.00th=[ 5407], 10.00th=[ 5932], 20.00th=[ 6325], 00:13:01.507 | 30.00th=[ 6587], 40.00th=[ 6783], 50.00th=[ 6915], 60.00th=[ 7111], 00:13:01.507 | 70.00th=[ 7242], 80.00th=[ 7504], 90.00th=[ 7767], 95.00th=[ 7963], 00:13:01.507 | 99.00th=[ 9765], 99.50th=[10421], 99.90th=[12125], 99.95th=[12387], 00:13:01.507 | 99.99th=[12780] 00:13:01.507 bw ( KiB/s): min= 6666, max=30184, per=88.77%, avg=23601.64, stdev=6732.99, samples=11 00:13:01.507 iops : min= 1666, max= 7546, avg=5900.36, stdev=1683.37, samples=11 00:13:01.507 lat (msec) : 4=0.49%, 10=96.37%, 20=3.14% 00:13:01.507 cpu : usr=5.51%, sys=22.33%, ctx=6267, majf=0, minf=145 00:13:01.507 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:13:01.507 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:01.507 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:01.507 issued rwts: total=65033,35445,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:01.507 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:01.507 00:13:01.507 Run status group 0 (all jobs): 00:13:01.507 READ: bw=42.3MiB/s (44.4MB/s), 42.3MiB/s-42.3MiB/s (44.4MB/s-44.4MB/s), io=254MiB (266MB), run=6006-6006msec 00:13:01.507 WRITE: bw=26.0MiB/s (27.2MB/s), 26.0MiB/s-26.0MiB/s (27.2MB/s-27.2MB/s), io=138MiB (145MB), run=5333-5333msec 00:13:01.507 00:13:01.507 Disk stats (read/write): 00:13:01.507 nvme0n1: ios=63962/34924, merge=0/0, ticks=478060/225367, in_queue=703427, util=98.60% 00:13:01.507 17:13:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:13:01.766 17:13:47 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:13:02.024 17:13:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:13:02.024 17:13:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:13:02.024 17:13:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:02.024 17:13:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:13:02.024 17:13:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:13:02.024 17:13:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:13:02.024 17:13:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:13:02.024 17:13:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:13:02.024 17:13:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:02.024 17:13:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:13:02.024 17:13:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:13:02.024 17:13:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:13:02.024 17:13:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:13:02.960 17:13:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:13:02.960 17:13:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:13:02.960 17:13:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:13:02.960 17:13:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:13:02.960 17:13:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=70269 00:13:02.960 17:13:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:13:02.960 17:13:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:13:02.960 [global] 00:13:02.960 thread=1 00:13:02.960 invalidate=1 00:13:02.960 rw=randrw 00:13:02.960 time_based=1 00:13:02.960 runtime=6 00:13:02.960 ioengine=libaio 00:13:02.960 direct=1 00:13:02.960 bs=4096 00:13:02.960 iodepth=128 00:13:02.960 norandommap=0 00:13:02.960 numjobs=1 00:13:02.960 00:13:02.960 verify_dump=1 00:13:02.960 verify_backlog=512 00:13:02.960 verify_state_save=0 00:13:02.960 do_verify=1 00:13:02.960 verify=crc32c-intel 00:13:02.960 [job0] 00:13:02.960 filename=/dev/nvme0n1 00:13:02.960 Could not set queue depth (nvme0n1) 00:13:03.218 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:03.218 fio-3.35 00:13:03.218 Starting 1 thread 00:13:04.153 17:13:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:13:04.153 17:13:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:13:04.411 17:13:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:13:04.411 17:13:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:13:04.411 17:13:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:04.411 17:13:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:13:04.411 17:13:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:13:04.411 17:13:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:13:04.411 17:13:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:13:04.411 17:13:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:13:04.411 17:13:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:04.411 17:13:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:13:04.411 17:13:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:13:04.411 17:13:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:13:04.411 17:13:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:13:05.802 17:13:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:13:05.803 17:13:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:13:05.803 17:13:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:13:05.803 17:13:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:13:05.803 17:13:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:13:06.060 17:13:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:13:06.060 17:13:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:13:06.060 17:13:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:06.060 17:13:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:13:06.060 17:13:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:13:06.060 17:13:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:13:06.060 17:13:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:13:06.060 17:13:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:13:06.060 17:13:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:13:06.060 17:13:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:13:06.060 17:13:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:13:06.060 17:13:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:13:06.060 17:13:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:13:06.995 17:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:13:06.995 17:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:13:06.995 17:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:13:06.995 17:13:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 70269 00:13:09.571 00:13:09.571 job0: (groupid=0, jobs=1): err= 0: pid=70290: Wed Jul 24 17:13:55 2024 00:13:09.571 read: IOPS=10.3k, BW=40.2MiB/s (42.1MB/s)(241MiB/6006msec) 00:13:09.571 slat (usec): min=2, max=5250, avg=50.51, stdev=246.23 00:13:09.571 clat (usec): min=690, max=19097, avg=8512.66, stdev=2025.49 00:13:09.571 lat (usec): min=702, max=19107, avg=8563.17, stdev=2034.96 00:13:09.571 clat percentiles (usec): 00:13:09.571 | 1.00th=[ 2769], 5.00th=[ 4883], 10.00th=[ 6128], 20.00th=[ 7373], 00:13:09.571 | 30.00th=[ 7767], 40.00th=[ 8160], 50.00th=[ 8455], 60.00th=[ 8848], 00:13:09.571 | 70.00th=[ 9241], 80.00th=[ 9896], 90.00th=[10814], 95.00th=[11994], 00:13:09.571 | 99.00th=[13829], 99.50th=[14615], 99.90th=[16188], 99.95th=[17171], 00:13:09.571 | 99.99th=[18482] 00:13:09.571 bw ( KiB/s): min=11120, max=30096, per=52.84%, avg=21747.18, stdev=5664.75, samples=11 00:13:09.571 iops : min= 2780, max= 7524, avg=5436.73, stdev=1416.18, samples=11 00:13:09.571 write: IOPS=6153, BW=24.0MiB/s (25.2MB/s)(128MiB/5338msec); 0 zone resets 00:13:09.571 slat (usec): min=4, max=5402, avg=56.80, stdev=156.74 00:13:09.571 clat (usec): min=1336, max=15215, avg=7198.19, stdev=1759.67 00:13:09.571 lat (usec): min=1374, max=15239, avg=7254.99, stdev=1766.47 00:13:09.571 clat percentiles (usec): 00:13:09.571 | 1.00th=[ 2606], 5.00th=[ 3490], 10.00th=[ 4293], 20.00th=[ 6325], 00:13:09.571 | 30.00th=[ 6915], 40.00th=[ 7242], 50.00th=[ 7504], 60.00th=[ 7767], 00:13:09.571 | 70.00th=[ 8029], 80.00th=[ 8356], 90.00th=[ 8848], 95.00th=[ 9503], 00:13:09.571 | 99.00th=[11469], 99.50th=[12256], 99.90th=[13829], 99.95th=[14222], 00:13:09.571 | 99.99th=[14615] 00:13:09.571 bw ( KiB/s): min=11784, max=31408, per=88.59%, avg=21806.18, stdev=5565.30, samples=11 00:13:09.571 iops : min= 2946, max= 7852, avg=5451.55, stdev=1391.32, samples=11 00:13:09.571 lat (usec) : 750=0.01%, 1000=0.01% 00:13:09.571 lat (msec) : 2=0.26%, 4=4.52%, 10=82.43%, 20=12.78% 00:13:09.571 cpu : usr=5.16%, sys=21.13%, ctx=6233, majf=0, minf=114 00:13:09.571 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:13:09.571 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:09.571 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:09.571 issued rwts: total=61798,32848,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:09.571 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:09.571 00:13:09.571 Run status group 0 (all jobs): 00:13:09.571 READ: bw=40.2MiB/s (42.1MB/s), 40.2MiB/s-40.2MiB/s (42.1MB/s-42.1MB/s), io=241MiB (253MB), run=6006-6006msec 00:13:09.571 WRITE: bw=24.0MiB/s (25.2MB/s), 24.0MiB/s-24.0MiB/s (25.2MB/s-25.2MB/s), io=128MiB (135MB), run=5338-5338msec 00:13:09.571 00:13:09.571 Disk stats (read/write): 00:13:09.571 nvme0n1: ios=60961/32300, merge=0/0, ticks=489763/218262, in_queue=708025, util=98.68% 00:13:09.571 17:13:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:09.571 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:13:09.571 17:13:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 
00:13:09.571 17:13:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1219 -- # local i=0 00:13:09.571 17:13:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:09.571 17:13:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:09.571 17:13:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:09.571 17:13:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:09.571 17:13:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # return 0 00:13:09.571 17:13:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:09.571 17:13:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:13:09.571 17:13:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:13:09.571 17:13:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:13:09.571 17:13:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:13:09.571 17:13:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:09.571 17:13:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:13:09.830 17:13:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:09.830 17:13:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:13:09.830 17:13:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:09.830 17:13:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:09.830 rmmod nvme_tcp 00:13:09.830 rmmod nvme_fabrics 00:13:09.830 rmmod nvme_keyring 00:13:09.830 17:13:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:09.830 17:13:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:13:09.830 17:13:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:13:09.830 17:13:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n 69972 ']' 00:13:09.830 17:13:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@490 -- # killprocess 69972 00:13:09.830 17:13:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@950 -- # '[' -z 69972 ']' 00:13:09.830 17:13:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # kill -0 69972 00:13:09.830 17:13:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@955 -- # uname 00:13:09.830 17:13:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:09.830 17:13:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69972 00:13:09.830 killing process with pid 69972 00:13:09.830 17:13:55 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:09.830 17:13:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:09.830 17:13:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69972' 00:13:09.830 17:13:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@969 -- # kill 69972 00:13:09.830 17:13:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@974 -- # wait 69972 00:13:10.088 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:10.088 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:10.088 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:10.088 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:10.088 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:10.088 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:10.088 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:10.088 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:10.088 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:10.088 00:13:10.088 real 0m20.480s 00:13:10.088 user 1m20.072s 00:13:10.088 sys 0m6.339s 00:13:10.088 ************************************ 00:13:10.088 END TEST nvmf_target_multipath 00:13:10.088 ************************************ 00:13:10.088 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:10.088 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:13:10.088 17:13:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:13:10.088 17:13:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:10.088 17:13:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:10.088 17:13:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:10.088 ************************************ 00:13:10.088 START TEST nvmf_zcopy 00:13:10.088 ************************************ 00:13:10.088 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:13:10.348 * Looking for test storage... 
00:13:10.348 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:10.348 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:10.348 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:13:10.348 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:10.348 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:10.348 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:10.348 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:10.348 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:10.348 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:10.348 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:10.348 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:10.348 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:10.348 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:10.348 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:13:10.348 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:13:10.348 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:10.348 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:10.348 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:10.348 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:10.348 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:10.348 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:10.348 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:10.348 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:10.348 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.348 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.349 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.349 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:13:10.349 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.349 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:13:10.349 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:10.349 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:10.349 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:10.349 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:10.349 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:10.349 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:10.349 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:10.349 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:10.349 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:13:10.349 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:10.349 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:10.349 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:10.349 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 
00:13:10.349 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:10.349 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:10.349 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:10.349 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:10.349 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:10.349 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:10.349 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:10.349 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:10.349 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:10.349 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:10.349 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:10.349 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:10.349 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:10.349 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:10.349 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:10.349 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:10.349 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:10.349 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:10.349 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:10.349 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:10.349 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:10.349 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:10.349 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:10.349 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:10.349 Cannot find device "nvmf_tgt_br" 00:13:10.349 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # true 00:13:10.349 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:10.349 Cannot find device "nvmf_tgt_br2" 00:13:10.349 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # true 00:13:10.349 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:10.349 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:10.349 Cannot find device "nvmf_tgt_br" 00:13:10.349 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # true 
00:13:10.349 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:10.349 Cannot find device "nvmf_tgt_br2" 00:13:10.349 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # true 00:13:10.349 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:10.349 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:10.349 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:10.349 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:10.349 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:13:10.349 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:10.349 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:10.349 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:13:10.349 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:10.349 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:10.349 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:10.349 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:10.349 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:10.608 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:10.608 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:10.608 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:10.608 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:10.608 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:10.608 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:10.608 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:10.608 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:10.608 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:10.608 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:10.608 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:10.608 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:10.608 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:10.608 17:13:56 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:10.608 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:10.608 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:10.608 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:10.608 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:10.608 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:10.608 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:10.608 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:13:10.608 00:13:10.608 --- 10.0.0.2 ping statistics --- 00:13:10.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:10.608 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:13:10.608 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:10.608 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:10.609 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:13:10.609 00:13:10.609 --- 10.0.0.3 ping statistics --- 00:13:10.609 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:10.609 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:13:10.609 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:10.609 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:10.609 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:13:10.609 00:13:10.609 --- 10.0.0.1 ping statistics --- 00:13:10.609 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:10.609 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:13:10.609 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:10.609 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@433 -- # return 0 00:13:10.609 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:10.609 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:10.609 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:10.609 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:10.609 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:10.609 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:10.609 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:10.609 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:13:10.609 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:10.609 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:10.609 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:10.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
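The NET_TYPE=virt topology that nvmf_veth_init builds can be read straight off the trace above. Condensed into plain commands (interface names and addresses exactly as traced; the helper's error handling and teardown are omitted), it amounts to roughly this sketch:

ip netns add nvmf_tgt_ns_spdk                                        # target runs in its own network namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br            # initiator end stays in the root namespace
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk                      # target ends move into the namespace
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                             # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up; ip link set nvmf_init_br up
ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up            # bridge joins the three peer ends
for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT    # admit NVMe/TCP traffic on the target port
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                             # the ping checks above verify this topology
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1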
00:13:10.609 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=70569 00:13:10.609 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 70569 00:13:10.609 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:10.609 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 70569 ']' 00:13:10.609 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:10.609 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:10.609 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:10.609 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:10.609 17:13:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:10.609 [2024-07-24 17:13:56.816843] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:13:10.609 [2024-07-24 17:13:56.816946] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:10.867 [2024-07-24 17:13:56.958949] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:10.867 [2024-07-24 17:13:57.088337] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:10.867 [2024-07-24 17:13:57.088397] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:10.867 [2024-07-24 17:13:57.088412] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:10.867 [2024-07-24 17:13:57.088423] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:10.867 [2024-07-24 17:13:57.088432] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
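The target itself is launched with the command shown just above, and the script then waits for it to come up. A minimal stand-in for that start/wait step (the polling loop is only an approximation of waitforlisten, assuming rpc.py's rpc_get_methods call as the liveness probe) would be:

ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
# poll the UNIX-domain RPC socket until the app answers, bailing out if it dies first
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
	kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited during startup" >&2; exit 1; }
	sleep 0.5
done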
00:13:10.867 [2024-07-24 17:13:57.088472] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:11.807 17:13:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:11.807 17:13:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:13:11.807 17:13:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:11.807 17:13:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:11.807 17:13:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:11.807 17:13:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:11.807 17:13:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:13:11.807 17:13:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:13:11.807 17:13:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.807 17:13:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:11.807 [2024-07-24 17:13:57.841633] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:11.807 17:13:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.807 17:13:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:11.807 17:13:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.807 17:13:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:11.807 17:13:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.807 17:13:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:11.807 17:13:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.807 17:13:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:11.807 [2024-07-24 17:13:57.857778] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:11.807 17:13:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.807 17:13:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:11.807 17:13:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.807 17:13:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:11.807 17:13:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.807 17:13:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:13:11.807 17:13:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.807 17:13:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:11.807 malloc0 00:13:11.807 17:13:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.807 17:13:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:11.807 17:13:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.807 17:13:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:11.807 17:13:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.807 17:13:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:13:11.807 17:13:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:13:11.807 17:13:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:13:11.807 17:13:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:13:11.807 17:13:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:11.807 17:13:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:11.807 { 00:13:11.807 "params": { 00:13:11.807 "name": "Nvme$subsystem", 00:13:11.807 "trtype": "$TEST_TRANSPORT", 00:13:11.807 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:11.807 "adrfam": "ipv4", 00:13:11.807 "trsvcid": "$NVMF_PORT", 00:13:11.807 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:11.807 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:11.807 "hdgst": ${hdgst:-false}, 00:13:11.807 "ddgst": ${ddgst:-false} 00:13:11.807 }, 00:13:11.807 "method": "bdev_nvme_attach_controller" 00:13:11.807 } 00:13:11.807 EOF 00:13:11.807 )") 00:13:11.807 17:13:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:13:11.807 17:13:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:13:11.807 17:13:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:13:11.807 17:13:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:11.807 "params": { 00:13:11.807 "name": "Nvme1", 00:13:11.807 "trtype": "tcp", 00:13:11.807 "traddr": "10.0.0.2", 00:13:11.807 "adrfam": "ipv4", 00:13:11.807 "trsvcid": "4420", 00:13:11.807 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:11.807 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:11.807 "hdgst": false, 00:13:11.807 "ddgst": false 00:13:11.807 }, 00:13:11.807 "method": "bdev_nvme_attach_controller" 00:13:11.807 }' 00:13:11.807 [2024-07-24 17:13:57.953345] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:13:11.807 [2024-07-24 17:13:57.953440] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70620 ] 00:13:12.068 [2024-07-24 17:13:58.092610] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:12.068 [2024-07-24 17:13:58.223817] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:12.327 Running I/O for 10 seconds... 
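For readability, the rpc_cmd configuration traced above boils down to the following direct rpc.py calls (rpc_cmd is a thin wrapper around scripts/rpc.py), followed by the bdevperf run that produces the 10-second verify results below. Feeding the generated JSON through process substitution is an approximation of how zcopy.sh arrives at the /dev/fd/62 path seen in the trace:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -c 0 --zcopy                                      # TCP transport with zero-copy enabled
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10   # allow any host, up to 10 namespaces
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_malloc_create 32 4096 -b malloc0                                             # 32 MB RAM bdev, 4096-byte blocks, backs the namespace
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
	--json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192                  # attaches Nvme1 over 10.0.0.2:4420 using the JSON printed above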
00:13:22.342 00:13:22.342 Latency(us) 00:13:22.342 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:22.342 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:13:22.342 Verification LBA range: start 0x0 length 0x1000 00:13:22.342 Nvme1n1 : 10.01 6167.41 48.18 0.00 0.00 20687.66 1050.07 30027.40 00:13:22.342 =================================================================================================================== 00:13:22.342 Total : 6167.41 48.18 0.00 0.00 20687.66 1050.07 30027.40 00:13:22.605 17:14:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=70742 00:13:22.605 17:14:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:13:22.605 17:14:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:22.605 17:14:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:13:22.605 17:14:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:13:22.605 17:14:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:13:22.605 17:14:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:13:22.605 17:14:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:22.605 17:14:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:22.605 { 00:13:22.605 "params": { 00:13:22.605 "name": "Nvme$subsystem", 00:13:22.605 "trtype": "$TEST_TRANSPORT", 00:13:22.605 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:22.605 "adrfam": "ipv4", 00:13:22.605 "trsvcid": "$NVMF_PORT", 00:13:22.605 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:22.605 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:22.605 "hdgst": ${hdgst:-false}, 00:13:22.605 "ddgst": ${ddgst:-false} 00:13:22.605 }, 00:13:22.605 "method": "bdev_nvme_attach_controller" 00:13:22.605 } 00:13:22.605 EOF 00:13:22.605 )") 00:13:22.605 17:14:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:13:22.605 [2024-07-24 17:14:08.658064] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.605 [2024-07-24 17:14:08.658129] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.605 17:14:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:13:22.605 17:14:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:13:22.605 2024/07/24 17:14:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:22.605 17:14:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:22.605 "params": { 00:13:22.605 "name": "Nvme1", 00:13:22.605 "trtype": "tcp", 00:13:22.605 "traddr": "10.0.0.2", 00:13:22.605 "adrfam": "ipv4", 00:13:22.605 "trsvcid": "4420", 00:13:22.605 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:22.605 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:22.605 "hdgst": false, 00:13:22.605 "ddgst": false 00:13:22.605 }, 00:13:22.605 "method": "bdev_nvme_attach_controller" 00:13:22.605 }' 00:13:22.605 [2024-07-24 17:14:08.670031] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.605 [2024-07-24 17:14:08.670058] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.605 2024/07/24 17:14:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:22.605 [2024-07-24 17:14:08.682030] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.605 [2024-07-24 17:14:08.682056] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.606 2024/07/24 17:14:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:22.606 [2024-07-24 17:14:08.694039] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.606 [2024-07-24 17:14:08.694065] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.606 2024/07/24 17:14:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:22.606 [2024-07-24 17:14:08.706053] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.606 [2024-07-24 17:14:08.706083] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.606 [2024-07-24 17:14:08.709050] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
00:13:22.606 [2024-07-24 17:14:08.709178] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70742 ] 00:13:22.606 2024/07/24 17:14:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:22.606 [2024-07-24 17:14:08.718039] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.606 [2024-07-24 17:14:08.718066] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.606 2024/07/24 17:14:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:22.606 [2024-07-24 17:14:08.730038] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.606 [2024-07-24 17:14:08.730062] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.606 2024/07/24 17:14:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:22.606 [2024-07-24 17:14:08.742037] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.606 [2024-07-24 17:14:08.742061] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.606 2024/07/24 17:14:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:22.606 [2024-07-24 17:14:08.754039] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.606 [2024-07-24 17:14:08.754063] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.606 2024/07/24 17:14:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:22.606 [2024-07-24 17:14:08.766038] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.606 [2024-07-24 17:14:08.766066] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.606 2024/07/24 17:14:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:22.606 [2024-07-24 17:14:08.778075] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.606 [2024-07-24 17:14:08.778113] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:13:22.606 2024/07/24 17:14:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:22.606 [2024-07-24 17:14:08.790084] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.606 [2024-07-24 17:14:08.790123] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.606 2024/07/24 17:14:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:22.606 [2024-07-24 17:14:08.802083] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.606 [2024-07-24 17:14:08.802139] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.606 2024/07/24 17:14:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:22.606 [2024-07-24 17:14:08.814078] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.606 [2024-07-24 17:14:08.814132] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.606 2024/07/24 17:14:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:22.606 [2024-07-24 17:14:08.826058] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.606 [2024-07-24 17:14:08.826082] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.606 2024/07/24 17:14:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:22.606 [2024-07-24 17:14:08.838066] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.606 [2024-07-24 17:14:08.838092] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.869 2024/07/24 17:14:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:22.869 [2024-07-24 17:14:08.846135] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:22.869 [2024-07-24 17:14:08.850092] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.869 [2024-07-24 17:14:08.850133] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.869 2024/07/24 17:14:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:22.869 [2024-07-24 17:14:08.862112] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.869 [2024-07-24 17:14:08.862146] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.869 2024/07/24 17:14:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:22.869 [2024-07-24 17:14:08.874075] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.869 [2024-07-24 17:14:08.874126] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.869 2024/07/24 17:14:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:22.869 [2024-07-24 17:14:08.882081] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.869 [2024-07-24 17:14:08.882114] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.869 2024/07/24 17:14:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:22.869 [2024-07-24 17:14:08.890076] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.869 [2024-07-24 17:14:08.890112] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.869 2024/07/24 17:14:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:22.869 [2024-07-24 17:14:08.898075] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.869 [2024-07-24 17:14:08.898111] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.869 2024/07/24 17:14:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:22.869 [2024-07-24 17:14:08.910155] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.869 [2024-07-24 17:14:08.910191] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.869 2024/07/24 17:14:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:22.869 [2024-07-24 17:14:08.922159] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.869 [2024-07-24 17:14:08.922213] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.869 2024/07/24 17:14:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:22.869 [2024-07-24 17:14:08.934153] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.869 [2024-07-24 17:14:08.934188] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.869 2024/07/24 17:14:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:22.869 [2024-07-24 17:14:08.946136] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.869 [2024-07-24 17:14:08.946162] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.869 2024/07/24 17:14:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:22.869 [2024-07-24 17:14:08.958158] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.869 [2024-07-24 17:14:08.958187] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.869 2024/07/24 17:14:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:22.869 [2024-07-24 17:14:08.965830] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:22.869 [2024-07-24 17:14:08.970141] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.869 [2024-07-24 17:14:08.970169] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.869 2024/07/24 17:14:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:22.869 [2024-07-24 17:14:08.982164] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.869 [2024-07-24 17:14:08.982193] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.869 2024/07/24 17:14:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:22.869 [2024-07-24 17:14:08.994177] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.869 [2024-07-24 17:14:08.994211] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.869 2024/07/24 17:14:08 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:22.870 [2024-07-24 17:14:09.006172] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.870 [2024-07-24 17:14:09.006227] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.870 2024/07/24 17:14:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:22.870 [2024-07-24 17:14:09.018180] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.870 [2024-07-24 17:14:09.018214] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.870 2024/07/24 17:14:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:22.870 [2024-07-24 17:14:09.030139] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.870 [2024-07-24 17:14:09.030168] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.870 2024/07/24 17:14:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:22.870 [2024-07-24 17:14:09.042202] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.870 [2024-07-24 17:14:09.042235] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.870 2024/07/24 17:14:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:22.870 [2024-07-24 17:14:09.054157] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.870 [2024-07-24 17:14:09.054184] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.870 2024/07/24 17:14:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:22.870 [2024-07-24 17:14:09.066138] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.870 [2024-07-24 17:14:09.066162] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.870 2024/07/24 17:14:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:22.870 [2024-07-24 17:14:09.078206] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:13:22.870 [2024-07-24 17:14:09.078238] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.870 2024/07/24 17:14:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:22.870 [2024-07-24 17:14:09.090194] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:22.870 [2024-07-24 17:14:09.090223] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:22.870 2024/07/24 17:14:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:23.133 [2024-07-24 17:14:09.102167] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.133 [2024-07-24 17:14:09.102195] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.133 2024/07/24 17:14:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:23.133 [2024-07-24 17:14:09.114175] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.133 [2024-07-24 17:14:09.114207] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.133 2024/07/24 17:14:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:23.133 [2024-07-24 17:14:09.126175] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.133 [2024-07-24 17:14:09.126204] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.133 2024/07/24 17:14:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:23.133 [2024-07-24 17:14:09.138208] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.133 [2024-07-24 17:14:09.138240] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.133 2024/07/24 17:14:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:23.133 Running I/O for 5 seconds... 
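The long run of -32602 errors above (and continuing below) comes from the namespace-add RPC being re-issued while NSID 1 is still attached to cnode1, so the target rejects each attempt while bdevperf keeps driving I/O. Each rejected call has the following shape, with the parameters read off the error lines themselves:

/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
# -> error on JSON-RPC call: Code=-32602 Msg=Invalid parameters ("Requested NSID 1 already in use")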
00:13:23.133 [2024-07-24 17:14:09.150179] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.133 [2024-07-24 17:14:09.150207] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.133 2024/07/24 17:14:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:23.133 [2024-07-24 17:14:09.166664] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.133 [2024-07-24 17:14:09.166704] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.133 2024/07/24 17:14:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:23.133 [2024-07-24 17:14:09.181842] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.133 [2024-07-24 17:14:09.181898] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.133 2024/07/24 17:14:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:23.133 [2024-07-24 17:14:09.192047] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.133 [2024-07-24 17:14:09.192112] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.133 2024/07/24 17:14:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:23.133 [2024-07-24 17:14:09.206875] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.133 [2024-07-24 17:14:09.206926] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.133 2024/07/24 17:14:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:23.133 [2024-07-24 17:14:09.221800] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.133 [2024-07-24 17:14:09.221849] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.133 2024/07/24 17:14:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:23.133 [2024-07-24 17:14:09.236998] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.133 [2024-07-24 17:14:09.237047] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.133 2024/07/24 17:14:09 error on JSON-RPC call, 
method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:23.133 [2024-07-24 17:14:09.252242] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.133 [2024-07-24 17:14:09.252291] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.133 2024/07/24 17:14:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:23.133 [2024-07-24 17:14:09.268100] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.133 [2024-07-24 17:14:09.268179] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.133 2024/07/24 17:14:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:23.133 [2024-07-24 17:14:09.285782] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.133 [2024-07-24 17:14:09.285831] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.133 2024/07/24 17:14:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:23.133 [2024-07-24 17:14:09.300712] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.133 [2024-07-24 17:14:09.300780] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.133 2024/07/24 17:14:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:23.133 [2024-07-24 17:14:09.316460] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.133 [2024-07-24 17:14:09.316563] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.133 2024/07/24 17:14:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:23.133 [2024-07-24 17:14:09.333125] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.133 [2024-07-24 17:14:09.333190] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.134 2024/07/24 17:14:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:23.134 [2024-07-24 17:14:09.349967] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.134 [2024-07-24 17:14:09.350017] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.134 2024/07/24 17:14:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:23.395 [2024-07-24 17:14:09.365948] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.395 [2024-07-24 17:14:09.365983] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.395 2024/07/24 17:14:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:23.395 [2024-07-24 17:14:09.383197] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.395 [2024-07-24 17:14:09.383245] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.395 2024/07/24 17:14:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:23.395 [2024-07-24 17:14:09.398884] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.396 [2024-07-24 17:14:09.398950] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.396 2024/07/24 17:14:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:23.396 [2024-07-24 17:14:09.415497] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.396 [2024-07-24 17:14:09.415546] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.396 2024/07/24 17:14:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:23.396 [2024-07-24 17:14:09.434238] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.396 [2024-07-24 17:14:09.434277] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.396 2024/07/24 17:14:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:23.396 [2024-07-24 17:14:09.449259] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.396 [2024-07-24 17:14:09.449293] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.396 2024/07/24 17:14:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:23.396 [2024-07-24 17:14:09.467171] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.396 [2024-07-24 17:14:09.467232] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.396 2024/07/24 17:14:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:23.396 [2024-07-24 17:14:09.481786] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.396 [2024-07-24 17:14:09.481836] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.396 2024/07/24 17:14:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:23.396 [2024-07-24 17:14:09.496656] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.396 [2024-07-24 17:14:09.496707] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.396 2024/07/24 17:14:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:23.396 [2024-07-24 17:14:09.512040] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.396 [2024-07-24 17:14:09.512090] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.396 2024/07/24 17:14:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:23.396 [2024-07-24 17:14:09.522140] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.396 [2024-07-24 17:14:09.522180] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.396 2024/07/24 17:14:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:23.396 [2024-07-24 17:14:09.536822] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.396 [2024-07-24 17:14:09.536880] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.396 2024/07/24 17:14:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:23.396 [2024-07-24 17:14:09.553972] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:13:23.396 [2024-07-24 17:14:09.554007] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.396 2024/07/24 17:14:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:23.396 [2024-07-24 17:14:09.568948] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.396 [2024-07-24 17:14:09.568996] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.396 2024/07/24 17:14:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:23.396 [2024-07-24 17:14:09.584051] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.396 [2024-07-24 17:14:09.584147] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.396 2024/07/24 17:14:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:23.396 [2024-07-24 17:14:09.594750] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.396 [2024-07-24 17:14:09.594801] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.396 2024/07/24 17:14:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:23.396 [2024-07-24 17:14:09.609319] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.396 [2024-07-24 17:14:09.609366] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.396 2024/07/24 17:14:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:23.396 [2024-07-24 17:14:09.621292] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.396 [2024-07-24 17:14:09.621324] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.396 2024/07/24 17:14:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:23.656 [2024-07-24 17:14:09.638717] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.656 [2024-07-24 17:14:09.638767] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.656 2024/07/24 17:14:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:23.656 [2024-07-24 17:14:09.653541] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.656 [2024-07-24 17:14:09.653594] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.656 2024/07/24 17:14:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:23.656 [2024-07-24 17:14:09.670311] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.656 [2024-07-24 17:14:09.670393] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.656 2024/07/24 17:14:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:23.656 [2024-07-24 17:14:09.685683] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.656 [2024-07-24 17:14:09.685718] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.656 2024/07/24 17:14:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:23.656 [2024-07-24 17:14:09.702945] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.656 [2024-07-24 17:14:09.702995] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.656 2024/07/24 17:14:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:23.656 [2024-07-24 17:14:09.718307] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.656 [2024-07-24 17:14:09.718348] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.656 2024/07/24 17:14:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:23.656 [2024-07-24 17:14:09.728413] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.656 [2024-07-24 17:14:09.728446] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.656 2024/07/24 17:14:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:23.656 [2024-07-24 17:14:09.743262] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:13:23.656 [2024-07-24 17:14:09.743296] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.656 2024/07/24 17:14:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:23.656 [2024-07-24 17:14:09.759088] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.656 [2024-07-24 17:14:09.759149] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.656 2024/07/24 17:14:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:23.656 [2024-07-24 17:14:09.768982] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.656 [2024-07-24 17:14:09.769031] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.656 2024/07/24 17:14:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:23.656 [2024-07-24 17:14:09.784640] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.656 [2024-07-24 17:14:09.784691] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.656 2024/07/24 17:14:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:23.656 [2024-07-24 17:14:09.801573] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.656 [2024-07-24 17:14:09.801622] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.656 2024/07/24 17:14:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:23.656 [2024-07-24 17:14:09.817752] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.656 [2024-07-24 17:14:09.817801] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.656 2024/07/24 17:14:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:23.656 [2024-07-24 17:14:09.834445] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.656 [2024-07-24 17:14:09.834481] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.656 2024/07/24 17:14:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:23.656 [2024-07-24 17:14:09.850941] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.656 [2024-07-24 17:14:09.850991] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.656 2024/07/24 17:14:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:23.656 [2024-07-24 17:14:09.868273] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.656 [2024-07-24 17:14:09.868321] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.656 2024/07/24 17:14:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:23.656 [2024-07-24 17:14:09.883755] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.656 [2024-07-24 17:14:09.883800] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.656 2024/07/24 17:14:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:23.915 [2024-07-24 17:14:09.899577] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.915 [2024-07-24 17:14:09.899628] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.915 2024/07/24 17:14:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:23.915 [2024-07-24 17:14:09.909801] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.915 [2024-07-24 17:14:09.909849] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.915 2024/07/24 17:14:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:23.916 [2024-07-24 17:14:09.924203] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.916 [2024-07-24 17:14:09.924250] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.916 2024/07/24 17:14:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:23.916 [2024-07-24 17:14:09.940529] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.916 [2024-07-24 17:14:09.940579] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.916 2024/07/24 17:14:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:23.916 [2024-07-24 17:14:09.955772] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.916 [2024-07-24 17:14:09.955821] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.916 2024/07/24 17:14:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:23.916 [2024-07-24 17:14:09.970540] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.916 [2024-07-24 17:14:09.970573] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.916 2024/07/24 17:14:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:23.916 [2024-07-24 17:14:09.987385] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.916 [2024-07-24 17:14:09.987444] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.916 2024/07/24 17:14:09 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:23.916 [2024-07-24 17:14:10.002469] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.916 [2024-07-24 17:14:10.002507] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.916 2024/07/24 17:14:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:23.916 [2024-07-24 17:14:10.018717] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.916 [2024-07-24 17:14:10.018771] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.916 2024/07/24 17:14:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:23.916 [2024-07-24 17:14:10.035307] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.916 [2024-07-24 17:14:10.035354] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.916 2024/07/24 17:14:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:23.916 [2024-07-24 17:14:10.053099] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.916 [2024-07-24 17:14:10.053160] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.916 2024/07/24 17:14:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:23.916 [2024-07-24 17:14:10.068298] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.916 [2024-07-24 17:14:10.068350] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.916 2024/07/24 17:14:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:23.916 [2024-07-24 17:14:10.084133] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.916 [2024-07-24 17:14:10.084177] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.916 2024/07/24 17:14:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:23.916 [2024-07-24 17:14:10.094125] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.916 [2024-07-24 17:14:10.094184] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.916 2024/07/24 17:14:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:23.916 [2024-07-24 17:14:10.108384] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.916 [2024-07-24 17:14:10.108416] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.916 2024/07/24 17:14:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:23.916 [2024-07-24 17:14:10.123943] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.916 [2024-07-24 17:14:10.123984] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:23.916 2024/07/24 17:14:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:23.916 [2024-07-24 17:14:10.135046] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:23.916 [2024-07-24 17:14:10.135107] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:13:23.916 2024/07/24 17:14:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:24.175 [2024-07-24 17:14:10.149839] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.175 [2024-07-24 17:14:10.149889] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.175 2024/07/24 17:14:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:24.175 [2024-07-24 17:14:10.166912] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.175 [2024-07-24 17:14:10.166967] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.175 2024/07/24 17:14:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:24.175 [2024-07-24 17:14:10.183322] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.175 [2024-07-24 17:14:10.183351] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.175 2024/07/24 17:14:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:24.175 [2024-07-24 17:14:10.199398] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.175 [2024-07-24 17:14:10.199466] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.175 2024/07/24 17:14:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:24.175 [2024-07-24 17:14:10.216888] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.175 [2024-07-24 17:14:10.216940] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.175 2024/07/24 17:14:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:24.175 [2024-07-24 17:14:10.232126] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.175 [2024-07-24 17:14:10.232202] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.175 2024/07/24 17:14:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:13:24.175 [2024-07-24 17:14:10.247699] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.175 [2024-07-24 17:14:10.247734] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.175 2024/07/24 17:14:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:24.175 [2024-07-24 17:14:10.263252] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.175 [2024-07-24 17:14:10.263302] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.175 2024/07/24 17:14:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:24.175 [2024-07-24 17:14:10.279970] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.175 [2024-07-24 17:14:10.280028] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.175 2024/07/24 17:14:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:24.175 [2024-07-24 17:14:10.295998] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.175 [2024-07-24 17:14:10.296047] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.175 2024/07/24 17:14:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:24.175 [2024-07-24 17:14:10.312239] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.175 [2024-07-24 17:14:10.312271] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.175 2024/07/24 17:14:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:24.175 [2024-07-24 17:14:10.321830] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.175 [2024-07-24 17:14:10.321877] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.175 2024/07/24 17:14:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:24.175 [2024-07-24 17:14:10.337408] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.175 [2024-07-24 17:14:10.337442] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.175 2024/07/24 17:14:10 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:24.175 [2024-07-24 17:14:10.354896] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.175 [2024-07-24 17:14:10.354954] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.175 2024/07/24 17:14:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:24.175 [2024-07-24 17:14:10.370763] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.175 [2024-07-24 17:14:10.370817] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.175 2024/07/24 17:14:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:24.175 [2024-07-24 17:14:10.387495] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.175 [2024-07-24 17:14:10.387564] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.175 2024/07/24 17:14:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:24.175 [2024-07-24 17:14:10.403060] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.175 [2024-07-24 17:14:10.403135] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.175 2024/07/24 17:14:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:24.434 [2024-07-24 17:14:10.413757] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.434 [2024-07-24 17:14:10.413804] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.434 2024/07/24 17:14:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:24.434 [2024-07-24 17:14:10.429401] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.434 [2024-07-24 17:14:10.429435] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.434 2024/07/24 17:14:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:24.434 [2024-07-24 17:14:10.444193] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.434 [2024-07-24 17:14:10.444274] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.434 2024/07/24 17:14:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:24.434 [2024-07-24 17:14:10.460072] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.434 [2024-07-24 17:14:10.460119] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.434 2024/07/24 17:14:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:24.434 [2024-07-24 17:14:10.477642] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.434 [2024-07-24 17:14:10.477678] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.434 2024/07/24 17:14:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:24.434 [2024-07-24 17:14:10.494370] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.434 [2024-07-24 17:14:10.494406] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.434 2024/07/24 17:14:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:24.434 [2024-07-24 17:14:10.510626] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.434 [2024-07-24 17:14:10.510660] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.434 2024/07/24 17:14:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:24.434 [2024-07-24 17:14:10.526970] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.434 [2024-07-24 17:14:10.527020] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.434 2024/07/24 17:14:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:24.434 [2024-07-24 17:14:10.544910] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.434 [2024-07-24 17:14:10.544959] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.435 2024/07/24 17:14:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:24.435 [2024-07-24 17:14:10.560369] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.435 [2024-07-24 17:14:10.560419] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.435 2024/07/24 17:14:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:24.435 [2024-07-24 17:14:10.576947] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.435 [2024-07-24 17:14:10.576998] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.435 2024/07/24 17:14:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:24.435 [2024-07-24 17:14:10.593438] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.435 [2024-07-24 17:14:10.593487] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.435 2024/07/24 17:14:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:24.435 [2024-07-24 17:14:10.609504] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.435 [2024-07-24 17:14:10.609553] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.435 2024/07/24 17:14:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:24.435 [2024-07-24 17:14:10.620709] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.435 [2024-07-24 17:14:10.620761] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.435 2024/07/24 17:14:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:24.435 [2024-07-24 17:14:10.638047] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.435 [2024-07-24 17:14:10.638080] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.435 2024/07/24 17:14:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:24.435 [2024-07-24 17:14:10.653522] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:13:24.435 [2024-07-24 17:14:10.653570] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.435 2024/07/24 17:14:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:24.435 [2024-07-24 17:14:10.664999] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.435 [2024-07-24 17:14:10.665030] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.694 2024/07/24 17:14:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:24.694 [2024-07-24 17:14:10.682297] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.694 [2024-07-24 17:14:10.682367] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.694 2024/07/24 17:14:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:24.694 [2024-07-24 17:14:10.698489] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.694 [2024-07-24 17:14:10.698522] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.694 2024/07/24 17:14:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:24.694 [2024-07-24 17:14:10.708582] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.694 [2024-07-24 17:14:10.708615] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.694 2024/07/24 17:14:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:24.694 [2024-07-24 17:14:10.723294] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.694 [2024-07-24 17:14:10.723358] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.694 2024/07/24 17:14:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:24.694 [2024-07-24 17:14:10.733839] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:24.694 [2024-07-24 17:14:10.733871] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:24.694 2024/07/24 17:14:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:13:24.694 [2024-07-24 17:14:10.748717] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:24.694 [2024-07-24 17:14:10.748765] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:24.694 2024/07/24 17:14:10 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
[... the same three-line failure (subsystem.c: "Requested NSID 1 already in use", nvmf_rpc.c: "Unable to add namespace", JSON-RPC error Code=-32602 Msg=Invalid parameters) repeats back-to-back with identical parameters; only the timestamps advance, from 17:14:10.76 through 17:14:11.53 (elapsed 00:13:24.694 to 00:13:25.480) ...]
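The no_auto_visible:%!s(bool=false) token inside the logged params is not part of the RPC payload; it is Go's fmt package flagging that a bool value was formatted with the %s verb, which also tells us the client printing these "error on JSON-RPC call" lines is a Go program. A hypothetical snippet (not taken from the test client) that reproduces the marker:

```go
package main

import "fmt"

func main() {
	// Formatting a bool with the %s verb makes fmt emit the bad-verb marker
	// "%!s(bool=false)" seen in the JSON-RPC error lines above.
	fmt.Printf("no_auto_visible:%s\n", false) // -> no_auto_visible:%!s(bool=false)

	// %v is the verb such a log line presumably intends.
	fmt.Printf("no_auto_visible:%v\n", false) // -> no_auto_visible:false
}
```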
[... the identical failure triplet continues without interruption, still against nqn.2016-06.io.spdk:cnode1 with bdev_name:malloc0 and nsid:1; timestamps advance from 17:14:11.54 through roughly 17:14:12.82 (elapsed 00:13:25.480 to 00:13:26.778) ...]
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:26.519 [2024-07-24 17:14:12.736817] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:26.519 [2024-07-24 17:14:12.736865] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:26.519 2024/07/24 17:14:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:26.778 [2024-07-24 17:14:12.754975] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:26.778 [2024-07-24 17:14:12.755022] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:26.778 2024/07/24 17:14:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:26.778 [2024-07-24 17:14:12.769740] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:26.778 [2024-07-24 17:14:12.769774] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:26.778 2024/07/24 17:14:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:26.778 [2024-07-24 17:14:12.786005] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:26.778 [2024-07-24 17:14:12.786056] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:26.778 2024/07/24 17:14:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:26.778 [2024-07-24 17:14:12.802449] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:26.778 [2024-07-24 17:14:12.802492] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:26.778 2024/07/24 17:14:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:26.778 [2024-07-24 17:14:12.817739] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:26.778 [2024-07-24 17:14:12.817774] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:26.778 2024/07/24 17:14:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:26.778 [2024-07-24 17:14:12.833695] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:13:26.778 [2024-07-24 17:14:12.833747] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:26.778 2024/07/24 17:14:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:26.778 [2024-07-24 17:14:12.844024] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:26.778 [2024-07-24 17:14:12.844073] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:26.778 2024/07/24 17:14:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:26.778 [2024-07-24 17:14:12.858241] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:26.778 [2024-07-24 17:14:12.858289] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:26.778 2024/07/24 17:14:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:26.778 [2024-07-24 17:14:12.874238] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:26.778 [2024-07-24 17:14:12.874286] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:26.778 2024/07/24 17:14:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:26.778 [2024-07-24 17:14:12.890897] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:26.778 [2024-07-24 17:14:12.890946] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:26.778 2024/07/24 17:14:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:26.778 [2024-07-24 17:14:12.907113] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:26.778 [2024-07-24 17:14:12.907159] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:26.778 2024/07/24 17:14:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:26.778 [2024-07-24 17:14:12.922648] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:26.778 [2024-07-24 17:14:12.922727] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:26.778 2024/07/24 17:14:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:26.778 [2024-07-24 17:14:12.937583] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:26.778 [2024-07-24 17:14:12.937653] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:26.778 2024/07/24 17:14:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:26.779 [2024-07-24 17:14:12.953261] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:26.779 [2024-07-24 17:14:12.953308] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:26.779 2024/07/24 17:14:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:26.779 [2024-07-24 17:14:12.970370] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:26.779 [2024-07-24 17:14:12.970405] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:26.779 2024/07/24 17:14:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:26.779 [2024-07-24 17:14:12.985608] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:26.779 [2024-07-24 17:14:12.985656] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:26.779 2024/07/24 17:14:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:26.779 [2024-07-24 17:14:12.996091] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:26.779 [2024-07-24 17:14:12.996146] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:26.779 2024/07/24 17:14:12 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:26.779 [2024-07-24 17:14:13.010784] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:26.779 [2024-07-24 17:14:13.010831] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:26.779 2024/07/24 17:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:27.037 [2024-07-24 17:14:13.020804] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
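
The block of Code=-32602 failures above (and continuing below) comes from the test deliberately re-adding NSID 1 to nqn.2016-06.io.spdk:cnode1 while that namespace is still attached: the target rejects each attempt in spdk_nvmf_subsystem_add_ns_ext and the Go JSON-RPC client logs the resulting "Invalid parameters" error. The sketch below replays the same request directly against the target's JSON-RPC socket; it is only an illustration, and the socket path /var/tmp/spdk.sock (SPDK's usual default) plus the presence of the malloc0 bdev and the cnode1 subsystem are assumptions about this environment.

#!/usr/bin/env python3
# Minimal sketch, not part of the test run: replays the nvmf_subsystem_add_ns
# request that the log shows failing. Assumes an SPDK target is listening on
# its default RPC socket (/var/tmp/spdk.sock) and that the malloc0 bdev and
# the nqn.2016-06.io.spdk:cnode1 subsystem already exist.
import json
import socket

def spdk_rpc(method, params, sock_path="/var/tmp/spdk.sock"):
    request = {"jsonrpc": "2.0", "id": 1, "method": method, "params": params}
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(sock_path)
        sock.sendall(json.dumps(request).encode())
        buf = b""
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                break
            buf += chunk
            try:
                return json.loads(buf)  # stop once a complete JSON reply has arrived
            except json.JSONDecodeError:
                continue  # reply not complete yet, keep reading
    return None

# Same parameters as in the failing calls logged here.
params = {
    "nqn": "nqn.2016-06.io.spdk:cnode1",
    "namespace": {"bdev_name": "malloc0", "nsid": 1},
}
reply = spdk_rpc("nvmf_subsystem_add_ns", params)
# While NSID 1 is still attached, the target answers with the error the log
# repeats: {"code": -32602, "message": "Invalid parameters"}.
print(reply and (reply.get("error") or reply.get("result")))
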
00:13:27.037 [2024-07-24 17:14:13.020852] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.037 2024/07/24 17:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:27.037 [2024-07-24 17:14:13.035617] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.037 [2024-07-24 17:14:13.035675] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.037 2024/07/24 17:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:27.037 [2024-07-24 17:14:13.051681] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.037 [2024-07-24 17:14:13.051731] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.037 2024/07/24 17:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:27.037 [2024-07-24 17:14:13.067505] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.037 [2024-07-24 17:14:13.067572] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.037 2024/07/24 17:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:27.038 [2024-07-24 17:14:13.084519] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.038 [2024-07-24 17:14:13.084568] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.038 2024/07/24 17:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:27.038 [2024-07-24 17:14:13.101445] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.038 [2024-07-24 17:14:13.101487] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.038 2024/07/24 17:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:27.038 [2024-07-24 17:14:13.116667] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.038 [2024-07-24 17:14:13.116718] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.038 2024/07/24 17:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:27.038 [2024-07-24 17:14:13.133539] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.038 [2024-07-24 17:14:13.133588] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.038 2024/07/24 17:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:27.038 [2024-07-24 17:14:13.150145] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.038 [2024-07-24 17:14:13.150198] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.038 2024/07/24 17:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:27.038 [2024-07-24 17:14:13.166442] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.038 [2024-07-24 17:14:13.166485] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.038 2024/07/24 17:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:27.038 [2024-07-24 17:14:13.184200] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.038 [2024-07-24 17:14:13.184260] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.038 2024/07/24 17:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:27.038 [2024-07-24 17:14:13.200782] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.038 [2024-07-24 17:14:13.200836] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.038 2024/07/24 17:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:27.038 [2024-07-24 17:14:13.217309] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.038 [2024-07-24 17:14:13.217343] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.038 2024/07/24 17:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:27.038 [2024-07-24 17:14:13.233625] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.038 [2024-07-24 17:14:13.233659] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.038 2024/07/24 17:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:27.038 [2024-07-24 17:14:13.245288] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.038 [2024-07-24 17:14:13.245321] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.038 2024/07/24 17:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:27.038 [2024-07-24 17:14:13.261881] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.038 [2024-07-24 17:14:13.261930] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.038 2024/07/24 17:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:27.296 [2024-07-24 17:14:13.277019] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.296 [2024-07-24 17:14:13.277069] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.296 2024/07/24 17:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:27.296 [2024-07-24 17:14:13.293950] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.296 [2024-07-24 17:14:13.294000] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.296 2024/07/24 17:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:27.296 [2024-07-24 17:14:13.308812] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.296 [2024-07-24 17:14:13.308855] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.297 2024/07/24 17:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:27.297 [2024-07-24 17:14:13.327310] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.297 [2024-07-24 17:14:13.327359] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.297 2024/07/24 17:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:27.297 [2024-07-24 17:14:13.341337] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.297 [2024-07-24 17:14:13.341385] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.297 2024/07/24 17:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:27.297 [2024-07-24 17:14:13.356890] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.297 [2024-07-24 17:14:13.356939] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.297 2024/07/24 17:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:27.297 [2024-07-24 17:14:13.367220] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.297 [2024-07-24 17:14:13.367253] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.297 2024/07/24 17:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:27.297 [2024-07-24 17:14:13.381615] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.297 [2024-07-24 17:14:13.381663] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.297 2024/07/24 17:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:27.297 [2024-07-24 17:14:13.397800] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.297 [2024-07-24 17:14:13.397850] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.297 2024/07/24 17:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:27.297 [2024-07-24 17:14:13.414492] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.297 [2024-07-24 17:14:13.414528] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.297 2024/07/24 17:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:27.297 [2024-07-24 17:14:13.432742] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.297 [2024-07-24 17:14:13.432792] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:13:27.297 2024/07/24 17:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:27.297 [2024-07-24 17:14:13.447601] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.297 [2024-07-24 17:14:13.447635] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.297 2024/07/24 17:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:27.297 [2024-07-24 17:14:13.463995] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.297 [2024-07-24 17:14:13.464047] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.297 2024/07/24 17:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:27.297 [2024-07-24 17:14:13.478704] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.297 [2024-07-24 17:14:13.478754] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.297 2024/07/24 17:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:27.297 [2024-07-24 17:14:13.493969] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.297 [2024-07-24 17:14:13.494018] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.297 2024/07/24 17:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:27.297 [2024-07-24 17:14:13.508937] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.297 [2024-07-24 17:14:13.508986] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.297 2024/07/24 17:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:27.297 [2024-07-24 17:14:13.524930] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.297 [2024-07-24 17:14:13.524978] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.297 2024/07/24 17:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:13:27.555 [2024-07-24 17:14:13.540579] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.555 [2024-07-24 17:14:13.540630] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.555 2024/07/24 17:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:27.555 [2024-07-24 17:14:13.557621] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.555 [2024-07-24 17:14:13.557689] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.556 2024/07/24 17:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:27.556 [2024-07-24 17:14:13.572021] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.556 [2024-07-24 17:14:13.572068] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.556 2024/07/24 17:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:27.556 [2024-07-24 17:14:13.588279] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.556 [2024-07-24 17:14:13.588327] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.556 2024/07/24 17:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:27.556 [2024-07-24 17:14:13.603208] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.556 [2024-07-24 17:14:13.603255] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.556 2024/07/24 17:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:27.556 [2024-07-24 17:14:13.617575] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.556 [2024-07-24 17:14:13.617623] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.556 2024/07/24 17:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:27.556 [2024-07-24 17:14:13.632506] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.556 [2024-07-24 17:14:13.632554] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.556 2024/07/24 17:14:13 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:27.556 [2024-07-24 17:14:13.648621] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.556 [2024-07-24 17:14:13.648669] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.556 2024/07/24 17:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:27.556 [2024-07-24 17:14:13.664985] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.556 [2024-07-24 17:14:13.665033] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.556 2024/07/24 17:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:27.556 [2024-07-24 17:14:13.682118] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.556 [2024-07-24 17:14:13.682192] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.556 2024/07/24 17:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:27.556 [2024-07-24 17:14:13.697416] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.556 [2024-07-24 17:14:13.697466] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.556 2024/07/24 17:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:27.556 [2024-07-24 17:14:13.709716] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.556 [2024-07-24 17:14:13.709764] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.556 2024/07/24 17:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:27.556 [2024-07-24 17:14:13.725947] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.556 [2024-07-24 17:14:13.725995] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.556 2024/07/24 17:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:27.556 [2024-07-24 17:14:13.742804] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.556 [2024-07-24 17:14:13.742852] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.556 2024/07/24 17:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:27.556 [2024-07-24 17:14:13.759672] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.556 [2024-07-24 17:14:13.759720] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.556 2024/07/24 17:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:27.556 [2024-07-24 17:14:13.776146] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.556 [2024-07-24 17:14:13.776192] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.556 2024/07/24 17:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:27.814 [2024-07-24 17:14:13.793115] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.814 [2024-07-24 17:14:13.793193] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.814 2024/07/24 17:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:27.814 [2024-07-24 17:14:13.809766] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.814 [2024-07-24 17:14:13.809815] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.815 2024/07/24 17:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:27.815 [2024-07-24 17:14:13.826750] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.815 [2024-07-24 17:14:13.826799] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.815 2024/07/24 17:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:27.815 [2024-07-24 17:14:13.841932] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.815 [2024-07-24 17:14:13.841997] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.815 2024/07/24 17:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:27.815 [2024-07-24 17:14:13.852429] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.815 [2024-07-24 17:14:13.852465] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.815 2024/07/24 17:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:27.815 [2024-07-24 17:14:13.867319] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.815 [2024-07-24 17:14:13.867368] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.815 2024/07/24 17:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:27.815 [2024-07-24 17:14:13.876808] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.815 [2024-07-24 17:14:13.876855] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.815 2024/07/24 17:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:27.815 [2024-07-24 17:14:13.891424] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.815 [2024-07-24 17:14:13.891471] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.815 2024/07/24 17:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:27.815 [2024-07-24 17:14:13.906436] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.815 [2024-07-24 17:14:13.906468] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.815 2024/07/24 17:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:27.815 [2024-07-24 17:14:13.920874] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.815 [2024-07-24 17:14:13.920921] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.815 2024/07/24 17:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:27.815 [2024-07-24 17:14:13.935774] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:13:27.815 [2024-07-24 17:14:13.935822] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.815 2024/07/24 17:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:27.815 [2024-07-24 17:14:13.952382] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.815 [2024-07-24 17:14:13.952416] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.815 2024/07/24 17:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:27.815 [2024-07-24 17:14:13.969812] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.815 [2024-07-24 17:14:13.969861] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.815 2024/07/24 17:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:27.815 [2024-07-24 17:14:13.984493] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.815 [2024-07-24 17:14:13.984573] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.815 2024/07/24 17:14:13 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:27.815 [2024-07-24 17:14:14.002966] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.815 [2024-07-24 17:14:14.003014] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.815 2024/07/24 17:14:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:27.815 [2024-07-24 17:14:14.017256] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.815 [2024-07-24 17:14:14.017289] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.815 2024/07/24 17:14:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:27.815 [2024-07-24 17:14:14.031916] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.815 [2024-07-24 17:14:14.031948] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:27.815 2024/07/24 17:14:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:27.815 [2024-07-24 17:14:14.047562] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:27.815 [2024-07-24 17:14:14.047613] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.075 2024/07/24 17:14:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:28.075 [2024-07-24 17:14:14.064068] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.075 [2024-07-24 17:14:14.064130] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.075 2024/07/24 17:14:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:28.075 [2024-07-24 17:14:14.081738] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.075 [2024-07-24 17:14:14.081787] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.075 2024/07/24 17:14:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:28.075 [2024-07-24 17:14:14.097321] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.075 [2024-07-24 17:14:14.097358] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.075 2024/07/24 17:14:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:28.075 [2024-07-24 17:14:14.107862] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.075 [2024-07-24 17:14:14.107911] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.075 2024/07/24 17:14:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:28.075 [2024-07-24 17:14:14.123197] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.075 [2024-07-24 17:14:14.123231] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.075 2024/07/24 17:14:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:28.075 [2024-07-24 17:14:14.140957] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
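
A few more of these add_ns rejections follow, and then the I/O side of the test prints its per-job latency summary for Nvme1n1. The reported columns are internally consistent, which is a quick sanity check on such a run; the arithmetic below is only an illustration using the numbers from that summary (8192-byte I/Os, 11850.28 IOPS, 10786.21 us average latency) and does not talk to SPDK at all.

# Pure arithmetic on the figures from the Nvme1n1 summary printed just below.
iops = 11850.28            # IOPS column
io_size = 8192             # bytes, from "IO size: 8192" in the job line
avg_latency_us = 10786.21  # Average latency column, in microseconds

throughput_mib_s = iops * io_size / (1024 * 1024)
print(f"{throughput_mib_s:.2f} MiB/s")    # ~92.58, matches the MiB/s column

# Little's law: average in-flight I/Os ~= IOPS * average latency (in seconds)
in_flight = iops * avg_latency_us / 1_000_000
print(f"{in_flight:.1f} in-flight I/Os")  # ~127.8, consistent with "depth: 128"
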
00:13:28.075 [2024-07-24 17:14:14.141006] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.075 2024/07/24 17:14:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:28.075 [2024-07-24 17:14:14.155012] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.075 [2024-07-24 17:14:14.155062] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.075 2024/07/24 17:14:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:13:28.075
00:13:28.075 Latency(us)
00:13:28.075 Device Information          : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:13:28.075 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:13:28.075 Nvme1n1                     :       5.01   11850.28      92.58      0.00     0.00   10786.21    4468.36   20852.36
00:13:28.075 ===================================================================================================================
00:13:28.075 Total                       :              11850.28      92.58      0.00     0.00   10786.21    4468.36   20852.36
00:13:28.075 [2024-07-24 17:14:14.164375] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.075 [2024-07-24 17:14:14.164406] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.075 2024/07/24 17:14:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:28.075 [2024-07-24 17:14:14.176404] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.075 [2024-07-24 17:14:14.176435] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.075 2024/07/24 17:14:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:28.075 [2024-07-24 17:14:14.188396] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.075 [2024-07-24 17:14:14.188432] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.075 2024/07/24 17:14:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:28.075 [2024-07-24 17:14:14.200404] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.075 [2024-07-24 17:14:14.200442] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.075 2024/07/24 17:14:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:28.075 [2024-07-24 17:14:14.212395] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.075 [2024-07-24 17:14:14.212429] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.075 2024/07/24 17:14:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:28.075 [2024-07-24 17:14:14.224420] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.075 [2024-07-24 17:14:14.224469] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.075 2024/07/24 17:14:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:28.075 [2024-07-24 17:14:14.236406] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.076 [2024-07-24 17:14:14.236442] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.076 2024/07/24 17:14:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:28.076 [2024-07-24 17:14:14.248427] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.076 [2024-07-24 17:14:14.248460] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.076 2024/07/24 17:14:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:28.076 [2024-07-24 17:14:14.260412] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.076 [2024-07-24 17:14:14.260445] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.076 2024/07/24 17:14:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:28.076 [2024-07-24 17:14:14.272415] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.076 [2024-07-24 17:14:14.272450] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.076 2024/07/24 17:14:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:28.076 [2024-07-24 17:14:14.284417] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.076 [2024-07-24 17:14:14.284451] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:13:28.076 2024/07/24 17:14:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:28.076 [2024-07-24 17:14:14.296410] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.076 [2024-07-24 17:14:14.296437] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.076 2024/07/24 17:14:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:28.076 [2024-07-24 17:14:14.308404] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.076 [2024-07-24 17:14:14.308431] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.335 2024/07/24 17:14:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:28.335 [2024-07-24 17:14:14.320448] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.335 [2024-07-24 17:14:14.320483] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.335 2024/07/24 17:14:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:28.335 [2024-07-24 17:14:14.332442] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.335 [2024-07-24 17:14:14.332491] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.335 2024/07/24 17:14:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:28.335 [2024-07-24 17:14:14.344419] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.335 [2024-07-24 17:14:14.344444] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.335 2024/07/24 17:14:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:28.335 [2024-07-24 17:14:14.356416] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.335 [2024-07-24 17:14:14.356442] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.335 2024/07/24 17:14:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:13:28.335 [2024-07-24 17:14:14.368447] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.335 [2024-07-24 17:14:14.368496] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.335 2024/07/24 17:14:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:28.335 [2024-07-24 17:14:14.380434] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.335 [2024-07-24 17:14:14.380460] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.335 2024/07/24 17:14:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:28.335 [2024-07-24 17:14:14.392447] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:28.335 [2024-07-24 17:14:14.392476] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:28.335 2024/07/24 17:14:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:28.335 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (70742) - No such process 00:13:28.335 17:14:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 70742 00:13:28.335 17:14:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:28.335 17:14:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.335 17:14:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:28.335 17:14:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.335 17:14:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:28.335 17:14:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.335 17:14:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:28.335 delay0 00:13:28.335 17:14:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.335 17:14:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:13:28.335 17:14:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.335 17:14:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:28.335 17:14:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.335 17:14:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 
trsvcid:4420 ns:1' 00:13:28.595 [2024-07-24 17:14:14.596016] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:35.174 Initializing NVMe Controllers 00:13:35.174 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:35.174 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:35.174 Initialization complete. Launching workers. 00:13:35.174 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 75 00:13:35.174 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 362, failed to submit 33 00:13:35.174 success 183, unsuccess 179, failed 0 00:13:35.174 17:14:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:13:35.174 17:14:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:13:35.174 17:14:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:35.174 17:14:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:13:35.174 17:14:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:35.174 17:14:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:13:35.174 17:14:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:35.174 17:14:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:35.174 rmmod nvme_tcp 00:13:35.174 rmmod nvme_fabrics 00:13:35.174 rmmod nvme_keyring 00:13:35.174 17:14:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:35.174 17:14:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:13:35.174 17:14:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:13:35.174 17:14:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 70569 ']' 00:13:35.174 17:14:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 70569 00:13:35.174 17:14:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 70569 ']' 00:13:35.174 17:14:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 70569 00:13:35.174 17:14:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:13:35.174 17:14:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:35.174 17:14:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70569 00:13:35.174 17:14:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:35.175 killing process with pid 70569 00:13:35.175 17:14:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:35.175 17:14:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70569' 00:13:35.175 17:14:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 70569 00:13:35.175 17:14:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 70569 00:13:35.175 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:35.175 17:14:21 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:35.175 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:35.175 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:35.175 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:35.175 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:35.175 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:35.175 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:35.175 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:35.175 00:13:35.175 real 0m24.794s 00:13:35.175 user 0m40.580s 00:13:35.175 sys 0m6.483s 00:13:35.175 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:35.175 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:35.175 ************************************ 00:13:35.175 END TEST nvmf_zcopy 00:13:35.175 ************************************ 00:13:35.175 17:14:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:13:35.175 17:14:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:35.175 17:14:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:35.175 17:14:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:35.175 ************************************ 00:13:35.175 START TEST nvmf_nmic 00:13:35.175 ************************************ 00:13:35.175 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:13:35.175 * Looking for test storage... 
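The long run of "Requested NSID 1 already in use" / "Unable to add namespace" errors above appears to come from a background loop in zcopy.sh re-issuing nvmf_subsystem_add_ns for NSID 1 while that namespace is still attached to cnode1. A minimal hand-run sketch that reproduces the same Code=-32602 failure (rpc.py path, NQN, bdev name and flags taken from the log; assumes an nvmf_tgt already listening on the default /var/tmp/spdk.sock):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  $rpc bdev_malloc_create 64 512 -b malloc0                  # backing bdev, as in the log
  $rpc nvmf_create_subsystem "$nqn" -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns "$nqn" malloc0 -n 1             # first add claims NSID 1
  $rpc nvmf_subsystem_add_ns "$nqn" malloc0 -n 1             # second add is rejected:
                                                             # "Requested NSID 1 already in use",
                                                             # surfaced to the RPC client as
                                                             # Code=-32602 Msg=Invalid parameters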
00:13:35.175 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:35.175 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:35.175 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:13:35.175 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:35.175 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:35.175 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:35.175 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:35.175 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:35.175 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:35.175 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:35.175 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:35.175 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:35.175 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:35.175 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:13:35.175 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:13:35.175 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:35.175 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:35.175 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:35.175 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:35.175 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:35.175 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:35.175 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:35.175 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:35.175 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.175 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.175 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.175 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:13:35.175 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.175 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:13:35.175 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:35.175 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:35.175 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:35.175 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:35.175 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:35.175 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:35.175 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:35.175 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:35.175 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:35.175 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:35.175 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:13:35.175 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:35.175 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:35.175 17:14:21 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:35.175 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:35.175 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:35.175 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:35.175 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:35.175 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:35.175 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:35.175 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:35.175 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:35.175 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:35.175 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:35.175 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:35.175 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:35.175 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:35.175 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:35.175 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:35.175 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:35.175 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:35.175 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:35.175 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:35.175 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:35.175 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:35.175 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:35.175 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:35.175 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:35.175 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:35.175 Cannot find device "nvmf_tgt_br" 00:13:35.175 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # true 00:13:35.175 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:35.176 Cannot find device "nvmf_tgt_br2" 00:13:35.176 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # true 00:13:35.176 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:35.176 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # ip link set 
nvmf_tgt_br down 00:13:35.176 Cannot find device "nvmf_tgt_br" 00:13:35.176 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # true 00:13:35.176 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:35.176 Cannot find device "nvmf_tgt_br2" 00:13:35.176 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # true 00:13:35.176 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:35.176 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:35.176 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:35.176 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:35.176 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:13:35.176 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:35.176 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:35.176 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:13:35.176 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:35.176 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:35.176 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:35.176 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:35.176 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:35.435 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:35.435 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:35.435 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:35.435 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:35.435 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:35.435 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:35.435 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:35.435 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:35.435 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:35.435 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:35.435 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:35.435 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:35.435 17:14:21 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:35.435 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:35.435 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:35.435 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:35.435 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:35.435 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:35.435 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:35.435 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:35.435 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:13:35.435 00:13:35.435 --- 10.0.0.2 ping statistics --- 00:13:35.435 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:35.435 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:13:35.435 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:35.435 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:35.435 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:13:35.435 00:13:35.435 --- 10.0.0.3 ping statistics --- 00:13:35.435 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:35.435 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:13:35.435 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:35.435 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:35.435 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:13:35.435 00:13:35.435 --- 10.0.0.1 ping statistics --- 00:13:35.435 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:35.435 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:13:35.435 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:35.435 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@433 -- # return 0 00:13:35.435 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:35.435 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:35.435 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:35.435 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:35.435 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:35.435 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:35.435 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:35.435 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:13:35.435 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:35.435 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:35.435 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:35.435 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=71066 00:13:35.435 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:35.435 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 71066 00:13:35.435 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 71066 ']' 00:13:35.435 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:35.435 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:35.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:35.435 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:35.435 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:35.435 17:14:21 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:35.694 [2024-07-24 17:14:21.681633] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
00:13:35.694 [2024-07-24 17:14:21.681780] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:35.694 [2024-07-24 17:14:21.832307] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:35.952 [2024-07-24 17:14:21.952755] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:35.952 [2024-07-24 17:14:21.952836] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:35.952 [2024-07-24 17:14:21.952862] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:35.952 [2024-07-24 17:14:21.952873] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:35.952 [2024-07-24 17:14:21.952883] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:35.952 [2024-07-24 17:14:21.953033] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:35.952 [2024-07-24 17:14:21.953237] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:35.952 [2024-07-24 17:14:21.953809] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:35.952 [2024-07-24 17:14:21.953844] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:36.519 17:14:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:36.519 17:14:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:13:36.519 17:14:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:36.519 17:14:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:36.519 17:14:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:36.519 17:14:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:36.519 17:14:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:36.519 17:14:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.519 17:14:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:36.519 [2024-07-24 17:14:22.679211] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:36.519 17:14:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.519 17:14:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:36.519 17:14:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.519 17:14:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:36.519 Malloc0 00:13:36.519 17:14:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.519 17:14:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:36.519 17:14:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.519 17:14:22 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:36.519 17:14:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.519 17:14:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:36.519 17:14:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.519 17:14:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:36.519 17:14:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.519 17:14:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:36.519 17:14:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.519 17:14:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:36.777 [2024-07-24 17:14:22.753204] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:36.777 17:14:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.777 test case1: single bdev can't be used in multiple subsystems 00:13:36.777 17:14:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:13:36.777 17:14:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:13:36.777 17:14:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.777 17:14:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:36.778 17:14:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.778 17:14:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:13:36.778 17:14:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.778 17:14:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:36.778 17:14:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.778 17:14:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:13:36.778 17:14:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:13:36.778 17:14:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.778 17:14:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:36.778 [2024-07-24 17:14:22.776993] bdev.c:8111:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:13:36.778 [2024-07-24 17:14:22.777037] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:13:36.778 [2024-07-24 17:14:22.777050] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:36.778 2024/07/24 17:14:22 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0 
no_auto_visible:%!s(bool=false)] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:13:36.778 request: 00:13:36.778 { 00:13:36.778 "method": "nvmf_subsystem_add_ns", 00:13:36.778 "params": { 00:13:36.778 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:13:36.778 "namespace": { 00:13:36.778 "bdev_name": "Malloc0", 00:13:36.778 "no_auto_visible": false 00:13:36.778 } 00:13:36.778 } 00:13:36.778 } 00:13:36.778 Got JSON-RPC error response 00:13:36.778 GoRPCClient: error on JSON-RPC call 00:13:36.778 17:14:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:36.778 17:14:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:13:36.778 17:14:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:13:36.778 Adding namespace failed - expected result. 00:13:36.778 17:14:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:13:36.778 test case2: host connect to nvmf target in multiple paths 00:13:36.778 17:14:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:13:36.778 17:14:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:13:36.778 17:14:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.778 17:14:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:36.778 [2024-07-24 17:14:22.789200] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:13:36.778 17:14:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.778 17:14:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid=6e739a91-f4c2-49d2-9c71-a65eb7d5315b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:36.778 17:14:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid=6e739a91-f4c2-49d2-9c71-a65eb7d5315b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:13:37.067 17:14:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:13:37.067 17:14:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:13:37.067 17:14:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:37.067 17:14:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:37.067 17:14:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:13:38.964 17:14:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:38.964 17:14:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:38.964 17:14:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:38.964 17:14:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:38.964 17:14:25 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:38.964 17:14:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:13:38.964 17:14:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:38.964 [global] 00:13:38.964 thread=1 00:13:38.964 invalidate=1 00:13:38.964 rw=write 00:13:38.964 time_based=1 00:13:38.964 runtime=1 00:13:38.964 ioengine=libaio 00:13:38.964 direct=1 00:13:38.964 bs=4096 00:13:38.964 iodepth=1 00:13:38.964 norandommap=0 00:13:38.964 numjobs=1 00:13:38.965 00:13:38.965 verify_dump=1 00:13:38.965 verify_backlog=512 00:13:38.965 verify_state_save=0 00:13:38.965 do_verify=1 00:13:38.965 verify=crc32c-intel 00:13:38.965 [job0] 00:13:38.965 filename=/dev/nvme0n1 00:13:38.965 Could not set queue depth (nvme0n1) 00:13:39.222 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:39.222 fio-3.35 00:13:39.222 Starting 1 thread 00:13:40.596 00:13:40.596 job0: (groupid=0, jobs=1): err= 0: pid=71171: Wed Jul 24 17:14:26 2024 00:13:40.596 read: IOPS=3254, BW=12.7MiB/s (13.3MB/s)(12.7MiB/1001msec) 00:13:40.596 slat (nsec): min=13396, max=52026, avg=16265.29, stdev=3255.03 00:13:40.596 clat (usec): min=127, max=486, avg=145.87, stdev=15.10 00:13:40.596 lat (usec): min=141, max=501, avg=162.14, stdev=15.55 00:13:40.596 clat percentiles (usec): 00:13:40.596 | 1.00th=[ 131], 5.00th=[ 135], 10.00th=[ 135], 20.00th=[ 139], 00:13:40.596 | 30.00th=[ 141], 40.00th=[ 141], 50.00th=[ 143], 60.00th=[ 145], 00:13:40.596 | 70.00th=[ 149], 80.00th=[ 153], 90.00th=[ 159], 95.00th=[ 167], 00:13:40.596 | 99.00th=[ 182], 99.50th=[ 190], 99.90th=[ 400], 99.95th=[ 420], 00:13:40.596 | 99.99th=[ 486] 00:13:40.596 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:13:40.596 slat (usec): min=19, max=109, avg=23.58, stdev= 5.14 00:13:40.596 clat (usec): min=77, max=484, avg=104.77, stdev=11.41 00:13:40.596 lat (usec): min=109, max=530, avg=128.35, stdev=13.26 00:13:40.596 clat percentiles (usec): 00:13:40.596 | 1.00th=[ 92], 5.00th=[ 95], 10.00th=[ 96], 20.00th=[ 98], 00:13:40.596 | 30.00th=[ 100], 40.00th=[ 101], 50.00th=[ 103], 60.00th=[ 104], 00:13:40.596 | 70.00th=[ 108], 80.00th=[ 111], 90.00th=[ 118], 95.00th=[ 123], 00:13:40.596 | 99.00th=[ 135], 99.50th=[ 139], 99.90th=[ 155], 99.95th=[ 273], 00:13:40.596 | 99.99th=[ 486] 00:13:40.596 bw ( KiB/s): min=14840, max=14840, per=100.00%, avg=14840.00, stdev= 0.00, samples=1 00:13:40.596 iops : min= 3710, max= 3710, avg=3710.00, stdev= 0.00, samples=1 00:13:40.596 lat (usec) : 100=18.05%, 250=81.83%, 500=0.12% 00:13:40.596 cpu : usr=2.60%, sys=10.20%, ctx=6842, majf=0, minf=2 00:13:40.596 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:40.596 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:40.596 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:40.596 issued rwts: total=3258,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:40.596 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:40.596 00:13:40.596 Run status group 0 (all jobs): 00:13:40.596 READ: bw=12.7MiB/s (13.3MB/s), 12.7MiB/s-12.7MiB/s (13.3MB/s-13.3MB/s), io=12.7MiB (13.3MB), run=1001-1001msec 00:13:40.596 WRITE: bw=14.0MiB/s (14.7MB/s), 14.0MiB/s-14.0MiB/s (14.7MB/s-14.7MB/s), io=14.0MiB (14.7MB), 
run=1001-1001msec 00:13:40.596 00:13:40.596 Disk stats (read/write): 00:13:40.596 nvme0n1: ios=3082/3072, merge=0/0, ticks=488/351, in_queue=839, util=91.48% 00:13:40.596 17:14:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:40.596 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:13:40.596 17:14:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:40.596 17:14:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:13:40.596 17:14:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:40.596 17:14:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:40.596 17:14:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:40.596 17:14:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:40.596 17:14:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:13:40.596 17:14:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:13:40.596 17:14:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:13:40.596 17:14:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:40.596 17:14:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:13:40.596 17:14:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:40.596 17:14:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:13:40.596 17:14:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:40.596 17:14:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:40.596 rmmod nvme_tcp 00:13:40.596 rmmod nvme_fabrics 00:13:40.596 rmmod nvme_keyring 00:13:40.596 17:14:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:40.596 17:14:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:13:40.596 17:14:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:13:40.596 17:14:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 71066 ']' 00:13:40.596 17:14:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 71066 00:13:40.596 17:14:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 71066 ']' 00:13:40.597 17:14:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 71066 00:13:40.597 17:14:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:13:40.597 17:14:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:40.597 17:14:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71066 00:13:40.597 17:14:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:40.597 17:14:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:40.597 killing process with pid 71066 00:13:40.597 17:14:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 
'killing process with pid 71066' 00:13:40.597 17:14:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 71066 00:13:40.597 17:14:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 71066 00:13:40.855 17:14:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:40.855 17:14:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:40.855 17:14:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:40.855 17:14:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:40.855 17:14:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:40.855 17:14:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:40.855 17:14:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:40.855 17:14:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:40.855 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:40.855 00:13:40.855 real 0m5.916s 00:13:40.855 user 0m19.577s 00:13:40.855 sys 0m1.442s 00:13:40.855 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:40.855 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:40.855 ************************************ 00:13:40.855 END TEST nvmf_nmic 00:13:40.855 ************************************ 00:13:40.855 17:14:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:13:40.855 17:14:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:40.855 17:14:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:40.855 17:14:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:40.855 ************************************ 00:13:40.855 START TEST nvmf_fio_target 00:13:40.855 ************************************ 00:13:40.855 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:13:41.114 * Looking for test storage... 
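For reference, the nmic "test case2" flow above (one subsystem exposed on two TCP listeners, 10.0.0.2:4420 and 10.0.0.2:4421) boils down to the following host-side commands; the NQN, address, ports and serial are copied from the log, and the --hostnqn/--hostid arguments the test passes are omitted here for brevity:

  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421
  lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME   # the log sees one namespace
                                                           # device across both paths
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1            # log: "disconnected 2 controller(s)"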
00:13:41.114 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:41.114 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:41.114 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:13:41.114 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:41.114 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:41.114 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:41.114 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:41.114 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:41.114 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:41.114 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:41.114 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:41.114 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:41.114 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:41.114 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:13:41.114 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:13:41.114 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:41.114 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:41.114 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:41.114 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:41.114 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:41.114 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:41.114 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:41.114 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:41.114 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.114 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.114 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.114 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:13:41.114 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.114 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:13:41.114 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:41.114 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:41.114 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:41.114 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:41.114 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:41.114 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:41.114 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:41.114 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:41.114 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:41.114 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:41.114 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:41.114 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:13:41.114 
17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:41.114 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:41.114 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:41.114 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:41.114 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:41.114 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:41.114 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:41.114 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:41.114 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:41.114 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:41.114 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:41.114 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:41.114 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:41.114 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:41.114 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:41.114 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:41.114 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:41.114 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:41.114 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:41.114 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:41.114 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:41.114 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:41.114 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:41.114 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:41.114 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:41.114 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:41.114 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:41.114 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:41.114 Cannot find device "nvmf_tgt_br" 00:13:41.114 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # true 00:13:41.114 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:41.114 Cannot find device "nvmf_tgt_br2" 00:13:41.114 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # true 00:13:41.114 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:41.114 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:41.114 Cannot find device "nvmf_tgt_br" 00:13:41.114 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # true 00:13:41.114 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:41.114 Cannot find device "nvmf_tgt_br2" 00:13:41.114 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # true 00:13:41.115 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:41.115 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:41.115 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:41.115 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:41.115 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:13:41.115 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:41.115 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:41.115 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:13:41.115 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:41.115 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:41.115 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:41.115 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:41.373 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:41.373 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:41.373 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:41.373 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:41.373 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:41.373 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:41.373 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:41.373 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:41.373 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:41.373 
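At this point nvmf_veth_init has created the nvmf_tgt_ns_spdk namespace, the three veth pairs, and the 10.0.0.x/24 addressing; the entries that follow enslave the host-side peers to a bridge, open TCP port 4420 in iptables, and ping all three addresses to verify the topology. A condensed recreation of what these commands build, with interface names and addresses copied from the trace (run as root; this sketches the effect, not the exact common.sh code):

  # One namespace, three veth pairs: one initiator-facing, two target-facing.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk      # target ends live inside the namespace
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # first target address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # second target address
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # The following entries tie the host-side peers together and admit NVMe/TCP traffic:
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three ping checks that follow confirm that 10.0.0.2 and 10.0.0.3 are reachable from the host side and that 10.0.0.1 is reachable from inside the namespace before the target is started.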
17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:41.373 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:41.373 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:41.373 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:41.373 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:41.373 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:41.373 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:41.373 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:41.373 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:41.373 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:41.373 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:41.373 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:41.373 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:13:41.373 00:13:41.373 --- 10.0.0.2 ping statistics --- 00:13:41.373 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:41.373 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:13:41.373 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:41.373 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:41.373 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:13:41.373 00:13:41.373 --- 10.0.0.3 ping statistics --- 00:13:41.373 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:41.373 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:13:41.373 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:41.373 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:41.373 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:13:41.373 00:13:41.373 --- 10.0.0.1 ping statistics --- 00:13:41.373 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:41.373 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:13:41.373 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:41.373 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@433 -- # return 0 00:13:41.373 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:41.373 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:41.373 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:41.373 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:41.373 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:41.373 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:41.373 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:41.373 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:13:41.373 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:41.373 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:41.374 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.374 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=71351 00:13:41.374 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:41.374 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 71351 00:13:41.374 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 71351 ']' 00:13:41.374 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:41.374 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:41.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:41.374 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:41.374 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:41.374 17:14:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.374 [2024-07-24 17:14:27.577758] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
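With the network verified, nvmfappstart launches nvmf_tgt inside the namespace and waitforlisten blocks until the RPC socket at /var/tmp/spdk.sock answers; everything after that is plain rpc.py provisioning followed by an initiator-side nvme connect. A condensed sketch of the sequence traced in the next entries, using only commands and arguments visible in the log (the waitforlisten helper is paraphrased here as a simple poll, with rpc_get_methods used only as a liveness probe):

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  until "$rpc_py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done

  "$rpc_py" nvmf_create_transport -t tcp -o -u 8192                  # TCP transport, 8 KiB in-capsule data
  for i in 0 1 2 3 4 5 6; do "$rpc_py" bdev_malloc_create 64 512; done   # Malloc0..Malloc6
  "$rpc_py" bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
  "$rpc_py" bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
  "$rpc_py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  "$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  "$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  "$rpc_py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  "$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
  "$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
  nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
      -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420       # host NQN/ID from nvme gen-hostnqn above

Once waitforserial sees the four namespaces (lsblk reports serial SPDKISFASTANDAWESOME on nvme0n1 through nvme0n4), fio-wrapper runs the four-job verify workloads whose output makes up the rest of this section.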
00:13:41.374 [2024-07-24 17:14:27.577880] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:41.632 [2024-07-24 17:14:27.716424] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:41.632 [2024-07-24 17:14:27.835814] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:41.632 [2024-07-24 17:14:27.835877] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:41.632 [2024-07-24 17:14:27.835904] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:41.632 [2024-07-24 17:14:27.835912] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:41.632 [2024-07-24 17:14:27.835919] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:41.632 [2024-07-24 17:14:27.836290] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:41.632 [2024-07-24 17:14:27.836370] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:41.632 [2024-07-24 17:14:27.836490] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:41.632 [2024-07-24 17:14:27.836491] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:42.566 17:14:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:42.566 17:14:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:13:42.566 17:14:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:42.566 17:14:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:42.566 17:14:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.566 17:14:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:42.566 17:14:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:42.823 [2024-07-24 17:14:28.869943] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:42.823 17:14:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:43.081 17:14:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:13:43.081 17:14:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:43.339 17:14:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:13:43.339 17:14:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:43.596 17:14:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:13:43.596 17:14:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:43.854 17:14:30 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:13:43.854 17:14:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:13:44.111 17:14:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:44.369 17:14:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:13:44.369 17:14:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:44.935 17:14:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:13:44.935 17:14:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:44.935 17:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:13:44.935 17:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:13:45.194 17:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:45.452 17:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:45.452 17:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:45.709 17:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:45.709 17:14:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:45.966 17:14:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:46.223 [2024-07-24 17:14:32.359367] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:46.223 17:14:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:13:46.481 17:14:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:13:46.738 17:14:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid=6e739a91-f4c2-49d2-9c71-a65eb7d5315b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:46.997 17:14:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:13:46.997 17:14:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:13:46.997 17:14:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 
nvme_devices=0 00:13:46.997 17:14:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:13:46.997 17:14:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:13:46.997 17:14:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:13:48.896 17:14:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:48.896 17:14:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:48.896 17:14:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:48.896 17:14:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:13:48.896 17:14:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:48.896 17:14:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:13:48.896 17:14:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:48.896 [global] 00:13:48.896 thread=1 00:13:48.896 invalidate=1 00:13:48.896 rw=write 00:13:48.896 time_based=1 00:13:48.896 runtime=1 00:13:48.896 ioengine=libaio 00:13:48.896 direct=1 00:13:48.896 bs=4096 00:13:48.896 iodepth=1 00:13:48.896 norandommap=0 00:13:48.896 numjobs=1 00:13:48.896 00:13:48.896 verify_dump=1 00:13:48.896 verify_backlog=512 00:13:48.896 verify_state_save=0 00:13:48.896 do_verify=1 00:13:48.896 verify=crc32c-intel 00:13:48.896 [job0] 00:13:48.896 filename=/dev/nvme0n1 00:13:48.896 [job1] 00:13:48.896 filename=/dev/nvme0n2 00:13:48.896 [job2] 00:13:48.896 filename=/dev/nvme0n3 00:13:48.896 [job3] 00:13:48.896 filename=/dev/nvme0n4 00:13:49.155 Could not set queue depth (nvme0n1) 00:13:49.155 Could not set queue depth (nvme0n2) 00:13:49.155 Could not set queue depth (nvme0n3) 00:13:49.155 Could not set queue depth (nvme0n4) 00:13:49.155 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:49.155 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:49.155 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:49.155 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:49.155 fio-3.35 00:13:49.155 Starting 4 threads 00:13:50.531 00:13:50.531 job0: (groupid=0, jobs=1): err= 0: pid=71651: Wed Jul 24 17:14:36 2024 00:13:50.531 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:13:50.531 slat (nsec): min=13026, max=39075, avg=15190.16, stdev=2297.74 00:13:50.531 clat (usec): min=134, max=1583, avg=157.24, stdev=27.25 00:13:50.531 lat (usec): min=148, max=1598, avg=172.43, stdev=27.39 00:13:50.531 clat percentiles (usec): 00:13:50.531 | 1.00th=[ 139], 5.00th=[ 143], 10.00th=[ 147], 20.00th=[ 151], 00:13:50.531 | 30.00th=[ 153], 40.00th=[ 155], 50.00th=[ 157], 60.00th=[ 159], 00:13:50.531 | 70.00th=[ 161], 80.00th=[ 163], 90.00th=[ 169], 95.00th=[ 174], 00:13:50.531 | 99.00th=[ 182], 99.50th=[ 188], 99.90th=[ 198], 99.95th=[ 231], 00:13:50.531 | 99.99th=[ 1582] 00:13:50.531 write: IOPS=3188, BW=12.5MiB/s (13.1MB/s)(12.5MiB/1001msec); 0 zone resets 00:13:50.531 slat 
(usec): min=18, max=101, avg=21.79, stdev= 3.96 00:13:50.531 clat (usec): min=99, max=257, avg=122.39, stdev= 9.81 00:13:50.531 lat (usec): min=119, max=359, avg=144.18, stdev=11.05 00:13:50.531 clat percentiles (usec): 00:13:50.531 | 1.00th=[ 104], 5.00th=[ 109], 10.00th=[ 112], 20.00th=[ 115], 00:13:50.531 | 30.00th=[ 118], 40.00th=[ 120], 50.00th=[ 122], 60.00th=[ 124], 00:13:50.531 | 70.00th=[ 127], 80.00th=[ 130], 90.00th=[ 135], 95.00th=[ 141], 00:13:50.531 | 99.00th=[ 149], 99.50th=[ 155], 99.90th=[ 163], 99.95th=[ 194], 00:13:50.531 | 99.99th=[ 258] 00:13:50.531 bw ( KiB/s): min=12624, max=12624, per=26.58%, avg=12624.00, stdev= 0.00, samples=1 00:13:50.531 iops : min= 3156, max= 3156, avg=3156.00, stdev= 0.00, samples=1 00:13:50.531 lat (usec) : 100=0.02%, 250=99.95%, 500=0.02% 00:13:50.531 lat (msec) : 2=0.02% 00:13:50.531 cpu : usr=1.60%, sys=9.30%, ctx=6264, majf=0, minf=7 00:13:50.531 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:50.531 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:50.531 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:50.531 issued rwts: total=3072,3192,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:50.531 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:50.531 job1: (groupid=0, jobs=1): err= 0: pid=71652: Wed Jul 24 17:14:36 2024 00:13:50.531 read: IOPS=2596, BW=10.1MiB/s (10.6MB/s)(10.2MiB/1001msec) 00:13:50.531 slat (nsec): min=12384, max=43577, avg=15676.77, stdev=2564.25 00:13:50.531 clat (usec): min=135, max=6275, avg=172.96, stdev=197.96 00:13:50.531 lat (usec): min=149, max=6288, avg=188.64, stdev=198.24 00:13:50.531 clat percentiles (usec): 00:13:50.531 | 1.00th=[ 141], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 151], 00:13:50.531 | 30.00th=[ 155], 40.00th=[ 157], 50.00th=[ 159], 60.00th=[ 161], 00:13:50.531 | 70.00th=[ 163], 80.00th=[ 167], 90.00th=[ 174], 95.00th=[ 188], 00:13:50.531 | 99.00th=[ 314], 99.50th=[ 371], 99.90th=[ 4359], 99.95th=[ 4948], 00:13:50.531 | 99.99th=[ 6259] 00:13:50.531 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:13:50.531 slat (nsec): min=11795, max=90474, avg=22359.47, stdev=4767.73 00:13:50.531 clat (usec): min=102, max=478, avg=140.34, stdev=40.09 00:13:50.531 lat (usec): min=123, max=510, avg=162.69, stdev=39.81 00:13:50.531 clat percentiles (usec): 00:13:50.531 | 1.00th=[ 108], 5.00th=[ 111], 10.00th=[ 115], 20.00th=[ 119], 00:13:50.531 | 30.00th=[ 122], 40.00th=[ 124], 50.00th=[ 127], 60.00th=[ 130], 00:13:50.531 | 70.00th=[ 133], 80.00th=[ 141], 90.00th=[ 217], 95.00th=[ 245], 00:13:50.531 | 99.00th=[ 273], 99.50th=[ 281], 99.90th=[ 306], 99.95th=[ 334], 00:13:50.531 | 99.99th=[ 478] 00:13:50.531 bw ( KiB/s): min=12288, max=12288, per=25.87%, avg=12288.00, stdev= 0.00, samples=1 00:13:50.531 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:13:50.531 lat (usec) : 250=95.98%, 500=3.88%, 750=0.04% 00:13:50.531 lat (msec) : 4=0.05%, 10=0.05% 00:13:50.531 cpu : usr=2.20%, sys=8.10%, ctx=5671, majf=0, minf=13 00:13:50.531 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:50.531 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:50.531 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:50.531 issued rwts: total=2599,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:50.531 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:50.531 job2: (groupid=0, jobs=1): err= 0: pid=71653: Wed 
Jul 24 17:14:36 2024 00:13:50.531 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:13:50.531 slat (nsec): min=13300, max=62056, avg=18357.43, stdev=5600.77 00:13:50.531 clat (usec): min=155, max=266, avg=182.64, stdev=13.33 00:13:50.531 lat (usec): min=170, max=280, avg=200.99, stdev=15.19 00:13:50.531 clat percentiles (usec): 00:13:50.531 | 1.00th=[ 161], 5.00th=[ 165], 10.00th=[ 169], 20.00th=[ 172], 00:13:50.531 | 30.00th=[ 176], 40.00th=[ 178], 50.00th=[ 182], 60.00th=[ 184], 00:13:50.531 | 70.00th=[ 188], 80.00th=[ 192], 90.00th=[ 200], 95.00th=[ 206], 00:13:50.531 | 99.00th=[ 225], 99.50th=[ 239], 99.90th=[ 253], 99.95th=[ 265], 00:13:50.531 | 99.99th=[ 265] 00:13:50.531 write: IOPS=2833, BW=11.1MiB/s (11.6MB/s)(11.1MiB/1001msec); 0 zone resets 00:13:50.531 slat (nsec): min=18648, max=87367, avg=24814.83, stdev=5583.33 00:13:50.531 clat (usec): min=114, max=1608, avg=142.82, stdev=33.13 00:13:50.531 lat (usec): min=134, max=1629, avg=167.64, stdev=33.97 00:13:50.531 clat percentiles (usec): 00:13:50.531 | 1.00th=[ 121], 5.00th=[ 126], 10.00th=[ 129], 20.00th=[ 133], 00:13:50.531 | 30.00th=[ 135], 40.00th=[ 139], 50.00th=[ 141], 60.00th=[ 143], 00:13:50.531 | 70.00th=[ 147], 80.00th=[ 151], 90.00th=[ 159], 95.00th=[ 165], 00:13:50.531 | 99.00th=[ 182], 99.50th=[ 196], 99.90th=[ 383], 99.95th=[ 799], 00:13:50.531 | 99.99th=[ 1614] 00:13:50.531 bw ( KiB/s): min=12288, max=12288, per=25.87%, avg=12288.00, stdev= 0.00, samples=1 00:13:50.531 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:13:50.531 lat (usec) : 250=99.85%, 500=0.11%, 1000=0.02% 00:13:50.531 lat (msec) : 2=0.02% 00:13:50.531 cpu : usr=2.40%, sys=8.60%, ctx=5396, majf=0, minf=8 00:13:50.531 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:50.531 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:50.531 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:50.531 issued rwts: total=2560,2836,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:50.531 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:50.531 job3: (groupid=0, jobs=1): err= 0: pid=71654: Wed Jul 24 17:14:36 2024 00:13:50.531 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:13:50.531 slat (nsec): min=12527, max=56930, avg=15989.66, stdev=2710.05 00:13:50.531 clat (usec): min=150, max=495, avg=188.65, stdev=40.99 00:13:50.531 lat (usec): min=165, max=508, avg=204.64, stdev=41.23 00:13:50.531 clat percentiles (usec): 00:13:50.531 | 1.00th=[ 155], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 167], 00:13:50.531 | 30.00th=[ 172], 40.00th=[ 174], 50.00th=[ 176], 60.00th=[ 180], 00:13:50.531 | 70.00th=[ 184], 80.00th=[ 190], 90.00th=[ 253], 95.00th=[ 277], 00:13:50.531 | 99.00th=[ 355], 99.50th=[ 367], 99.90th=[ 404], 99.95th=[ 474], 00:13:50.531 | 99.99th=[ 498] 00:13:50.531 write: IOPS=2784, BW=10.9MiB/s (11.4MB/s)(10.9MiB/1001msec); 0 zone resets 00:13:50.531 slat (nsec): min=12227, max=93516, avg=22368.20, stdev=4102.39 00:13:50.531 clat (usec): min=111, max=1673, avg=145.28, stdev=42.46 00:13:50.531 lat (usec): min=133, max=1693, avg=167.64, stdev=42.59 00:13:50.531 clat percentiles (usec): 00:13:50.532 | 1.00th=[ 119], 5.00th=[ 124], 10.00th=[ 126], 20.00th=[ 130], 00:13:50.532 | 30.00th=[ 133], 40.00th=[ 135], 50.00th=[ 137], 60.00th=[ 141], 00:13:50.532 | 70.00th=[ 143], 80.00th=[ 149], 90.00th=[ 161], 95.00th=[ 223], 00:13:50.532 | 99.00th=[ 262], 99.50th=[ 273], 99.90th=[ 478], 99.95th=[ 693], 00:13:50.532 | 99.99th=[ 1680] 
00:13:50.532 bw ( KiB/s): min=12288, max=12288, per=25.87%, avg=12288.00, stdev= 0.00, samples=1 00:13:50.532 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:13:50.532 lat (usec) : 250=94.35%, 500=5.61%, 750=0.02% 00:13:50.532 lat (msec) : 2=0.02% 00:13:50.532 cpu : usr=1.20%, sys=8.40%, ctx=5348, majf=0, minf=7 00:13:50.532 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:50.532 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:50.532 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:50.532 issued rwts: total=2560,2787,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:50.532 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:50.532 00:13:50.532 Run status group 0 (all jobs): 00:13:50.532 READ: bw=42.1MiB/s (44.2MB/s), 9.99MiB/s-12.0MiB/s (10.5MB/s-12.6MB/s), io=42.2MiB (44.2MB), run=1001-1001msec 00:13:50.532 WRITE: bw=46.4MiB/s (48.6MB/s), 10.9MiB/s-12.5MiB/s (11.4MB/s-13.1MB/s), io=46.4MiB (48.7MB), run=1001-1001msec 00:13:50.532 00:13:50.532 Disk stats (read/write): 00:13:50.532 nvme0n1: ios=2610/2878, merge=0/0, ticks=446/388, in_queue=834, util=88.18% 00:13:50.532 nvme0n2: ios=2605/2565, merge=0/0, ticks=490/345, in_queue=835, util=88.87% 00:13:50.532 nvme0n3: ios=2150/2560, merge=0/0, ticks=409/388, in_queue=797, util=89.30% 00:13:50.532 nvme0n4: ios=2304/2560, merge=0/0, ticks=429/382, in_queue=811, util=89.76% 00:13:50.532 17:14:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:13:50.532 [global] 00:13:50.532 thread=1 00:13:50.532 invalidate=1 00:13:50.532 rw=randwrite 00:13:50.532 time_based=1 00:13:50.532 runtime=1 00:13:50.532 ioengine=libaio 00:13:50.532 direct=1 00:13:50.532 bs=4096 00:13:50.532 iodepth=1 00:13:50.532 norandommap=0 00:13:50.532 numjobs=1 00:13:50.532 00:13:50.532 verify_dump=1 00:13:50.532 verify_backlog=512 00:13:50.532 verify_state_save=0 00:13:50.532 do_verify=1 00:13:50.532 verify=crc32c-intel 00:13:50.532 [job0] 00:13:50.532 filename=/dev/nvme0n1 00:13:50.532 [job1] 00:13:50.532 filename=/dev/nvme0n2 00:13:50.532 [job2] 00:13:50.532 filename=/dev/nvme0n3 00:13:50.532 [job3] 00:13:50.532 filename=/dev/nvme0n4 00:13:50.532 Could not set queue depth (nvme0n1) 00:13:50.532 Could not set queue depth (nvme0n2) 00:13:50.532 Could not set queue depth (nvme0n3) 00:13:50.532 Could not set queue depth (nvme0n4) 00:13:50.532 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:50.532 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:50.532 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:50.532 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:50.532 fio-3.35 00:13:50.532 Starting 4 threads 00:13:51.908 00:13:51.908 job0: (groupid=0, jobs=1): err= 0: pid=71707: Wed Jul 24 17:14:37 2024 00:13:51.908 read: IOPS=2756, BW=10.8MiB/s (11.3MB/s)(10.8MiB/1001msec) 00:13:51.908 slat (nsec): min=13599, max=60436, avg=19843.03, stdev=6194.88 00:13:51.908 clat (usec): min=137, max=2099, avg=162.69, stdev=41.94 00:13:51.908 lat (usec): min=152, max=2114, avg=182.53, stdev=42.75 00:13:51.908 clat percentiles (usec): 00:13:51.908 | 1.00th=[ 141], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 153], 00:13:51.908 | 
30.00th=[ 155], 40.00th=[ 159], 50.00th=[ 161], 60.00th=[ 163], 00:13:51.908 | 70.00th=[ 165], 80.00th=[ 169], 90.00th=[ 176], 95.00th=[ 180], 00:13:51.908 | 99.00th=[ 192], 99.50th=[ 196], 99.90th=[ 660], 99.95th=[ 685], 00:13:51.908 | 99.99th=[ 2114] 00:13:51.908 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:13:51.908 slat (usec): min=20, max=123, avg=29.46, stdev= 9.41 00:13:51.908 clat (usec): min=99, max=232, avg=127.99, stdev=11.11 00:13:51.908 lat (usec): min=120, max=355, avg=157.46, stdev=16.86 00:13:51.908 clat percentiles (usec): 00:13:51.908 | 1.00th=[ 108], 5.00th=[ 114], 10.00th=[ 117], 20.00th=[ 120], 00:13:51.908 | 30.00th=[ 122], 40.00th=[ 125], 50.00th=[ 127], 60.00th=[ 130], 00:13:51.908 | 70.00th=[ 133], 80.00th=[ 137], 90.00th=[ 143], 95.00th=[ 149], 00:13:51.908 | 99.00th=[ 163], 99.50th=[ 169], 99.90th=[ 180], 99.95th=[ 192], 00:13:51.908 | 99.99th=[ 233] 00:13:51.908 bw ( KiB/s): min=12288, max=12288, per=26.11%, avg=12288.00, stdev= 0.00, samples=1 00:13:51.908 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:13:51.908 lat (usec) : 100=0.02%, 250=99.85%, 500=0.07%, 750=0.05% 00:13:51.908 lat (msec) : 4=0.02% 00:13:51.908 cpu : usr=2.80%, sys=10.90%, ctx=5831, majf=0, minf=9 00:13:51.908 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:51.908 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:51.908 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:51.908 issued rwts: total=2759,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:51.908 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:51.908 job1: (groupid=0, jobs=1): err= 0: pid=71708: Wed Jul 24 17:14:37 2024 00:13:51.908 read: IOPS=2946, BW=11.5MiB/s (12.1MB/s)(11.5MiB/1001msec) 00:13:51.908 slat (usec): min=12, max=168, avg=17.53, stdev= 9.53 00:13:51.908 clat (usec): min=39, max=1908, avg=162.72, stdev=35.73 00:13:51.908 lat (usec): min=150, max=1923, avg=180.25, stdev=36.74 00:13:51.908 clat percentiles (usec): 00:13:51.908 | 1.00th=[ 141], 5.00th=[ 147], 10.00th=[ 149], 20.00th=[ 153], 00:13:51.908 | 30.00th=[ 155], 40.00th=[ 159], 50.00th=[ 161], 60.00th=[ 163], 00:13:51.908 | 70.00th=[ 165], 80.00th=[ 169], 90.00th=[ 178], 95.00th=[ 184], 00:13:51.908 | 99.00th=[ 210], 99.50th=[ 233], 99.90th=[ 326], 99.95th=[ 465], 00:13:51.908 | 99.99th=[ 1909] 00:13:51.908 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:13:51.908 slat (usec): min=18, max=101, avg=22.41, stdev= 6.25 00:13:51.908 clat (usec): min=98, max=201, avg=126.44, stdev=10.71 00:13:51.908 lat (usec): min=120, max=303, avg=148.86, stdev=13.47 00:13:51.908 clat percentiles (usec): 00:13:51.908 | 1.00th=[ 106], 5.00th=[ 113], 10.00th=[ 115], 20.00th=[ 119], 00:13:51.908 | 30.00th=[ 121], 40.00th=[ 124], 50.00th=[ 126], 60.00th=[ 128], 00:13:51.908 | 70.00th=[ 131], 80.00th=[ 135], 90.00th=[ 141], 95.00th=[ 147], 00:13:51.908 | 99.00th=[ 159], 99.50th=[ 165], 99.90th=[ 178], 99.95th=[ 180], 00:13:51.908 | 99.99th=[ 202] 00:13:51.908 bw ( KiB/s): min=12288, max=12288, per=26.11%, avg=12288.00, stdev= 0.00, samples=1 00:13:51.908 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:13:51.908 lat (usec) : 50=0.05%, 100=0.07%, 250=99.68%, 500=0.18% 00:13:51.908 lat (msec) : 2=0.02% 00:13:51.908 cpu : usr=2.50%, sys=8.90%, ctx=6041, majf=0, minf=12 00:13:51.908 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:51.908 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:51.908 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:51.908 issued rwts: total=2949,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:51.908 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:51.908 job2: (groupid=0, jobs=1): err= 0: pid=71709: Wed Jul 24 17:14:37 2024 00:13:51.908 read: IOPS=2576, BW=10.1MiB/s (10.6MB/s)(10.1MiB/1001msec) 00:13:51.908 slat (nsec): min=13537, max=51291, avg=16478.38, stdev=2692.74 00:13:51.908 clat (usec): min=150, max=1830, avg=175.49, stdev=37.64 00:13:51.908 lat (usec): min=164, max=1848, avg=191.97, stdev=38.03 00:13:51.908 clat percentiles (usec): 00:13:51.908 | 1.00th=[ 155], 5.00th=[ 159], 10.00th=[ 161], 20.00th=[ 165], 00:13:51.908 | 30.00th=[ 169], 40.00th=[ 172], 50.00th=[ 174], 60.00th=[ 176], 00:13:51.908 | 70.00th=[ 178], 80.00th=[ 182], 90.00th=[ 188], 95.00th=[ 194], 00:13:51.908 | 99.00th=[ 215], 99.50th=[ 302], 99.90th=[ 457], 99.95th=[ 586], 00:13:51.908 | 99.99th=[ 1827] 00:13:51.908 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:13:51.908 slat (nsec): min=19111, max=73765, avg=23199.48, stdev=3862.79 00:13:51.908 clat (usec): min=111, max=340, avg=137.91, stdev=11.90 00:13:51.908 lat (usec): min=132, max=361, avg=161.11, stdev=12.96 00:13:51.908 clat percentiles (usec): 00:13:51.908 | 1.00th=[ 118], 5.00th=[ 123], 10.00th=[ 126], 20.00th=[ 130], 00:13:51.908 | 30.00th=[ 133], 40.00th=[ 135], 50.00th=[ 137], 60.00th=[ 139], 00:13:51.908 | 70.00th=[ 143], 80.00th=[ 147], 90.00th=[ 151], 95.00th=[ 157], 00:13:51.908 | 99.00th=[ 169], 99.50th=[ 174], 99.90th=[ 239], 99.95th=[ 273], 00:13:51.908 | 99.99th=[ 343] 00:13:51.908 bw ( KiB/s): min=12288, max=12288, per=26.11%, avg=12288.00, stdev= 0.00, samples=1 00:13:51.908 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:13:51.908 lat (usec) : 250=99.63%, 500=0.34%, 750=0.02% 00:13:51.908 lat (msec) : 2=0.02% 00:13:51.908 cpu : usr=1.70%, sys=9.00%, ctx=5652, majf=0, minf=11 00:13:51.908 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:51.908 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:51.908 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:51.908 issued rwts: total=2579,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:51.908 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:51.908 job3: (groupid=0, jobs=1): err= 0: pid=71710: Wed Jul 24 17:14:37 2024 00:13:51.908 read: IOPS=2540, BW=9.92MiB/s (10.4MB/s)(9.93MiB/1001msec) 00:13:51.908 slat (nsec): min=13175, max=65540, avg=17332.89, stdev=4523.55 00:13:51.908 clat (usec): min=153, max=2629, avg=192.89, stdev=78.71 00:13:51.908 lat (usec): min=168, max=2653, avg=210.22, stdev=79.06 00:13:51.908 clat percentiles (usec): 00:13:51.908 | 1.00th=[ 161], 5.00th=[ 165], 10.00th=[ 169], 20.00th=[ 174], 00:13:51.908 | 30.00th=[ 178], 40.00th=[ 180], 50.00th=[ 184], 60.00th=[ 190], 00:13:51.908 | 70.00th=[ 196], 80.00th=[ 204], 90.00th=[ 219], 95.00th=[ 227], 00:13:51.908 | 99.00th=[ 258], 99.50th=[ 433], 99.90th=[ 1795], 99.95th=[ 2212], 00:13:51.908 | 99.99th=[ 2638] 00:13:51.908 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:13:51.908 slat (usec): min=18, max=105, avg=25.28, stdev= 6.92 00:13:51.908 clat (usec): min=117, max=433, avg=153.00, stdev=25.70 00:13:51.908 lat (usec): min=138, max=462, avg=178.28, stdev=27.57 00:13:51.908 clat percentiles (usec): 
00:13:51.908 | 1.00th=[ 123], 5.00th=[ 128], 10.00th=[ 133], 20.00th=[ 137], 00:13:51.908 | 30.00th=[ 141], 40.00th=[ 145], 50.00th=[ 151], 60.00th=[ 155], 00:13:51.908 | 70.00th=[ 159], 80.00th=[ 165], 90.00th=[ 176], 95.00th=[ 184], 00:13:51.908 | 99.00th=[ 273], 99.50th=[ 326], 99.90th=[ 392], 99.95th=[ 412], 00:13:51.908 | 99.99th=[ 433] 00:13:51.908 bw ( KiB/s): min=12288, max=12288, per=26.11%, avg=12288.00, stdev= 0.00, samples=1 00:13:51.908 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:13:51.908 lat (usec) : 250=98.65%, 500=1.14%, 750=0.14%, 1000=0.02% 00:13:51.908 lat (msec) : 2=0.02%, 4=0.04% 00:13:51.908 cpu : usr=1.40%, sys=8.80%, ctx=5104, majf=0, minf=13 00:13:51.908 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:51.908 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:51.908 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:51.908 issued rwts: total=2543,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:51.908 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:51.908 00:13:51.908 Run status group 0 (all jobs): 00:13:51.909 READ: bw=42.3MiB/s (44.3MB/s), 9.92MiB/s-11.5MiB/s (10.4MB/s-12.1MB/s), io=42.3MiB (44.4MB), run=1001-1001msec 00:13:51.909 WRITE: bw=46.0MiB/s (48.2MB/s), 9.99MiB/s-12.0MiB/s (10.5MB/s-12.6MB/s), io=46.0MiB (48.2MB), run=1001-1001msec 00:13:51.909 00:13:51.909 Disk stats (read/write): 00:13:51.909 nvme0n1: ios=2598/2560, merge=0/0, ticks=447/349, in_queue=796, util=89.18% 00:13:51.909 nvme0n2: ios=2609/2663, merge=0/0, ticks=474/367, in_queue=841, util=89.71% 00:13:51.909 nvme0n3: ios=2324/2560, merge=0/0, ticks=419/377, in_queue=796, util=89.16% 00:13:51.909 nvme0n4: ios=2048/2369, merge=0/0, ticks=408/398, in_queue=806, util=89.69% 00:13:51.909 17:14:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:13:51.909 [global] 00:13:51.909 thread=1 00:13:51.909 invalidate=1 00:13:51.909 rw=write 00:13:51.909 time_based=1 00:13:51.909 runtime=1 00:13:51.909 ioengine=libaio 00:13:51.909 direct=1 00:13:51.909 bs=4096 00:13:51.909 iodepth=128 00:13:51.909 norandommap=0 00:13:51.909 numjobs=1 00:13:51.909 00:13:51.909 verify_dump=1 00:13:51.909 verify_backlog=512 00:13:51.909 verify_state_save=0 00:13:51.909 do_verify=1 00:13:51.909 verify=crc32c-intel 00:13:51.909 [job0] 00:13:51.909 filename=/dev/nvme0n1 00:13:51.909 [job1] 00:13:51.909 filename=/dev/nvme0n2 00:13:51.909 [job2] 00:13:51.909 filename=/dev/nvme0n3 00:13:51.909 [job3] 00:13:51.909 filename=/dev/nvme0n4 00:13:51.909 Could not set queue depth (nvme0n1) 00:13:51.909 Could not set queue depth (nvme0n2) 00:13:51.909 Could not set queue depth (nvme0n3) 00:13:51.909 Could not set queue depth (nvme0n4) 00:13:51.909 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:51.909 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:51.909 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:51.909 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:51.909 fio-3.35 00:13:51.909 Starting 4 threads 00:13:53.284 00:13:53.284 job0: (groupid=0, jobs=1): err= 0: pid=71770: Wed Jul 24 17:14:39 2024 00:13:53.284 read: IOPS=5455, BW=21.3MiB/s 
(22.3MB/s)(21.4MiB/1003msec) 00:13:53.284 slat (usec): min=5, max=2797, avg=87.68, stdev=405.78 00:13:53.284 clat (usec): min=402, max=15065, avg=11561.67, stdev=1176.07 00:13:53.284 lat (usec): min=2830, max=15529, avg=11649.35, stdev=1117.28 00:13:53.284 clat percentiles (usec): 00:13:53.284 | 1.00th=[ 5800], 5.00th=[ 9634], 10.00th=[10814], 20.00th=[11207], 00:13:53.284 | 30.00th=[11338], 40.00th=[11469], 50.00th=[11600], 60.00th=[11731], 00:13:53.284 | 70.00th=[11994], 80.00th=[12387], 90.00th=[12780], 95.00th=[12911], 00:13:53.284 | 99.00th=[13435], 99.50th=[14091], 99.90th=[14091], 99.95th=[14877], 00:13:53.284 | 99.99th=[15008] 00:13:53.284 write: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec); 0 zone resets 00:13:53.284 slat (usec): min=11, max=2865, avg=85.34, stdev=360.84 00:13:53.284 clat (usec): min=8703, max=14224, avg=11265.12, stdev=1243.40 00:13:53.284 lat (usec): min=8732, max=14246, avg=11350.46, stdev=1240.95 00:13:53.284 clat percentiles (usec): 00:13:53.284 | 1.00th=[ 8979], 5.00th=[ 9503], 10.00th=[ 9634], 20.00th=[ 9896], 00:13:53.284 | 30.00th=[10290], 40.00th=[10945], 50.00th=[11338], 60.00th=[11731], 00:13:53.284 | 70.00th=[11994], 80.00th=[12256], 90.00th=[12911], 95.00th=[13566], 00:13:53.284 | 99.00th=[13960], 99.50th=[14091], 99.90th=[14222], 99.95th=[14222], 00:13:53.284 | 99.99th=[14222] 00:13:53.284 bw ( KiB/s): min=21848, max=23208, per=32.44%, avg=22528.00, stdev=961.67, samples=2 00:13:53.284 iops : min= 5462, max= 5802, avg=5632.00, stdev=240.42, samples=2 00:13:53.284 lat (usec) : 500=0.01% 00:13:53.284 lat (msec) : 4=0.29%, 10=14.57%, 20=85.13% 00:13:53.284 cpu : usr=5.49%, sys=13.57%, ctx=594, majf=0, minf=11 00:13:53.284 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:13:53.284 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:53.284 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:53.284 issued rwts: total=5472,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:53.284 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:53.284 job1: (groupid=0, jobs=1): err= 0: pid=71771: Wed Jul 24 17:14:39 2024 00:13:53.284 read: IOPS=5467, BW=21.4MiB/s (22.4MB/s)(21.4MiB/1002msec) 00:13:53.284 slat (usec): min=5, max=3941, avg=87.28, stdev=448.68 00:13:53.284 clat (usec): min=441, max=16523, avg=11498.42, stdev=1426.51 00:13:53.284 lat (usec): min=3490, max=16560, avg=11585.70, stdev=1457.39 00:13:53.284 clat percentiles (usec): 00:13:53.284 | 1.00th=[ 7570], 5.00th=[ 9241], 10.00th=[10028], 20.00th=[10814], 00:13:53.284 | 30.00th=[10945], 40.00th=[11207], 50.00th=[11469], 60.00th=[11600], 00:13:53.284 | 70.00th=[12125], 80.00th=[12518], 90.00th=[13173], 95.00th=[13829], 00:13:53.284 | 99.00th=[14615], 99.50th=[15270], 99.90th=[16057], 99.95th=[16057], 00:13:53.284 | 99.99th=[16581] 00:13:53.284 write: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec); 0 zone resets 00:13:53.284 slat (usec): min=10, max=3586, avg=85.68, stdev=374.52 00:13:53.284 clat (usec): min=7739, max=15483, avg=11298.10, stdev=1207.03 00:13:53.284 lat (usec): min=7756, max=15525, avg=11383.78, stdev=1196.73 00:13:53.284 clat percentiles (usec): 00:13:53.284 | 1.00th=[ 8356], 5.00th=[ 8717], 10.00th=[ 9241], 20.00th=[10683], 00:13:53.284 | 30.00th=[11076], 40.00th=[11207], 50.00th=[11338], 60.00th=[11600], 00:13:53.284 | 70.00th=[11731], 80.00th=[12125], 90.00th=[12649], 95.00th=[13173], 00:13:53.284 | 99.00th=[14222], 99.50th=[14746], 99.90th=[15270], 99.95th=[15270], 00:13:53.284 | 
99.99th=[15533] 00:13:53.284 bw ( KiB/s): min=21928, max=23128, per=32.44%, avg=22528.00, stdev=848.53, samples=2 00:13:53.284 iops : min= 5482, max= 5782, avg=5632.00, stdev=212.13, samples=2 00:13:53.284 lat (usec) : 500=0.01% 00:13:53.284 lat (msec) : 4=0.26%, 10=11.16%, 20=88.57% 00:13:53.284 cpu : usr=4.10%, sys=14.99%, ctx=533, majf=0, minf=19 00:13:53.284 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:13:53.284 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:53.284 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:53.284 issued rwts: total=5478,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:53.284 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:53.284 job2: (groupid=0, jobs=1): err= 0: pid=71772: Wed Jul 24 17:14:39 2024 00:13:53.284 read: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec) 00:13:53.284 slat (usec): min=4, max=8724, avg=169.78, stdev=774.32 00:13:53.284 clat (usec): min=8316, max=35551, avg=21501.94, stdev=6022.98 00:13:53.284 lat (usec): min=8329, max=35567, avg=21671.72, stdev=6034.41 00:13:53.284 clat percentiles (usec): 00:13:53.284 | 1.00th=[11731], 5.00th=[13566], 10.00th=[14222], 20.00th=[14877], 00:13:53.284 | 30.00th=[15664], 40.00th=[19792], 50.00th=[22938], 60.00th=[24773], 00:13:53.284 | 70.00th=[25560], 80.00th=[26870], 90.00th=[28967], 95.00th=[30540], 00:13:53.284 | 99.00th=[33424], 99.50th=[34866], 99.90th=[35390], 99.95th=[35390], 00:13:53.284 | 99.99th=[35390] 00:13:53.284 write: IOPS=3082, BW=12.0MiB/s (12.6MB/s)(12.1MiB/1004msec); 0 zone resets 00:13:53.284 slat (usec): min=4, max=6289, avg=146.52, stdev=548.73 00:13:53.284 clat (usec): min=1971, max=33670, avg=19554.20, stdev=6123.19 00:13:53.284 lat (usec): min=4619, max=33740, avg=19700.71, stdev=6144.33 00:13:53.284 clat percentiles (usec): 00:13:53.284 | 1.00th=[11600], 5.00th=[12649], 10.00th=[12911], 20.00th=[13435], 00:13:53.284 | 30.00th=[14091], 40.00th=[15926], 50.00th=[16712], 60.00th=[22676], 00:13:53.284 | 70.00th=[25035], 80.00th=[25560], 90.00th=[26346], 95.00th=[30278], 00:13:53.284 | 99.00th=[32637], 99.50th=[33162], 99.90th=[33817], 99.95th=[33817], 00:13:53.284 | 99.99th=[33817] 00:13:53.284 bw ( KiB/s): min= 8360, max=16248, per=17.72%, avg=12304.00, stdev=5577.66, samples=2 00:13:53.284 iops : min= 2090, max= 4062, avg=3076.00, stdev=1394.41, samples=2 00:13:53.284 lat (msec) : 2=0.02%, 10=0.55%, 20=46.88%, 50=52.55% 00:13:53.284 cpu : usr=3.49%, sys=8.67%, ctx=740, majf=0, minf=15 00:13:53.284 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:13:53.284 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:53.284 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:53.284 issued rwts: total=3072,3095,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:53.284 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:53.284 job3: (groupid=0, jobs=1): err= 0: pid=71773: Wed Jul 24 17:14:39 2024 00:13:53.284 read: IOPS=2985, BW=11.7MiB/s (12.2MB/s)(11.7MiB/1003msec) 00:13:53.284 slat (usec): min=3, max=7620, avg=172.31, stdev=758.08 00:13:53.284 clat (usec): min=2032, max=35346, avg=22263.46, stdev=5955.24 00:13:53.284 lat (usec): min=5033, max=35632, avg=22435.76, stdev=5963.42 00:13:53.284 clat percentiles (usec): 00:13:53.284 | 1.00th=[ 8979], 5.00th=[14091], 10.00th=[14746], 20.00th=[15139], 00:13:53.284 | 30.00th=[15664], 40.00th=[21890], 50.00th=[24773], 60.00th=[25297], 00:13:53.284 | 
70.00th=[26084], 80.00th=[27132], 90.00th=[29230], 95.00th=[30540], 00:13:53.284 | 99.00th=[32113], 99.50th=[32637], 99.90th=[33162], 99.95th=[34341], 00:13:53.284 | 99.99th=[35390] 00:13:53.284 write: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec); 0 zone resets 00:13:53.284 slat (usec): min=10, max=7436, avg=149.99, stdev=597.55 00:13:53.284 clat (usec): min=10162, max=31009, avg=19515.24, stdev=5575.19 00:13:53.284 lat (usec): min=10714, max=31030, avg=19665.23, stdev=5597.97 00:13:53.284 clat percentiles (usec): 00:13:53.284 | 1.00th=[11600], 5.00th=[12256], 10.00th=[12649], 20.00th=[13304], 00:13:53.284 | 30.00th=[14746], 40.00th=[15664], 50.00th=[19530], 60.00th=[23725], 00:13:53.284 | 70.00th=[24773], 80.00th=[25560], 90.00th=[25822], 95.00th=[26608], 00:13:53.284 | 99.00th=[27919], 99.50th=[29492], 99.90th=[31065], 99.95th=[31065], 00:13:53.284 | 99.99th=[31065] 00:13:53.284 bw ( KiB/s): min= 8920, max=15687, per=17.72%, avg=12303.50, stdev=4784.99, samples=2 00:13:53.284 iops : min= 2230, max= 3921, avg=3075.50, stdev=1195.72, samples=2 00:13:53.284 lat (msec) : 4=0.02%, 10=0.53%, 20=42.27%, 50=57.19% 00:13:53.284 cpu : usr=3.59%, sys=7.68%, ctx=698, majf=0, minf=5 00:13:53.284 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:13:53.284 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:53.284 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:53.284 issued rwts: total=2994,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:53.284 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:53.284 00:13:53.284 Run status group 0 (all jobs): 00:13:53.284 READ: bw=66.2MiB/s (69.4MB/s), 11.7MiB/s-21.4MiB/s (12.2MB/s-22.4MB/s), io=66.5MiB (69.7MB), run=1002-1004msec 00:13:53.284 WRITE: bw=67.8MiB/s (71.1MB/s), 12.0MiB/s-22.0MiB/s (12.5MB/s-23.0MB/s), io=68.1MiB (71.4MB), run=1002-1004msec 00:13:53.284 00:13:53.284 Disk stats (read/write): 00:13:53.284 nvme0n1: ios=4658/4978, merge=0/0, ticks=12490/12074, in_queue=24564, util=88.88% 00:13:53.284 nvme0n2: ios=4657/4983, merge=0/0, ticks=16438/16307, in_queue=32745, util=89.30% 00:13:53.284 nvme0n3: ios=2560/2943, merge=0/0, ticks=12636/12732, in_queue=25368, util=89.23% 00:13:53.284 nvme0n4: ios=2560/2804, merge=0/0, ticks=13013/12195, in_queue=25208, util=88.96% 00:13:53.284 17:14:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:13:53.284 [global] 00:13:53.284 thread=1 00:13:53.284 invalidate=1 00:13:53.284 rw=randwrite 00:13:53.284 time_based=1 00:13:53.284 runtime=1 00:13:53.284 ioengine=libaio 00:13:53.284 direct=1 00:13:53.284 bs=4096 00:13:53.284 iodepth=128 00:13:53.284 norandommap=0 00:13:53.285 numjobs=1 00:13:53.285 00:13:53.285 verify_dump=1 00:13:53.285 verify_backlog=512 00:13:53.285 verify_state_save=0 00:13:53.285 do_verify=1 00:13:53.285 verify=crc32c-intel 00:13:53.285 [job0] 00:13:53.285 filename=/dev/nvme0n1 00:13:53.285 [job1] 00:13:53.285 filename=/dev/nvme0n2 00:13:53.285 [job2] 00:13:53.285 filename=/dev/nvme0n3 00:13:53.285 [job3] 00:13:53.285 filename=/dev/nvme0n4 00:13:53.285 Could not set queue depth (nvme0n1) 00:13:53.285 Could not set queue depth (nvme0n2) 00:13:53.285 Could not set queue depth (nvme0n3) 00:13:53.285 Could not set queue depth (nvme0n4) 00:13:53.285 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:53.285 job1: (g=0): 
rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:53.285 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:53.285 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:53.285 fio-3.35 00:13:53.285 Starting 4 threads 00:13:54.660 00:13:54.660 job0: (groupid=0, jobs=1): err= 0: pid=71827: Wed Jul 24 17:14:40 2024 00:13:54.660 read: IOPS=5626, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1001msec) 00:13:54.660 slat (usec): min=8, max=2681, avg=84.99, stdev=384.38 00:13:54.660 clat (usec): min=8448, max=13447, avg=11283.74, stdev=665.69 00:13:54.660 lat (usec): min=9149, max=14107, avg=11368.72, stdev=568.93 00:13:54.660 clat percentiles (usec): 00:13:54.660 | 1.00th=[ 9110], 5.00th=[ 9765], 10.00th=[10552], 20.00th=[10945], 00:13:54.660 | 30.00th=[11076], 40.00th=[11207], 50.00th=[11338], 60.00th=[11469], 00:13:54.660 | 70.00th=[11600], 80.00th=[11731], 90.00th=[11863], 95.00th=[12125], 00:13:54.660 | 99.00th=[12518], 99.50th=[12780], 99.90th=[13042], 99.95th=[13042], 00:13:54.660 | 99.99th=[13435] 00:13:54.660 write: IOPS=5861, BW=22.9MiB/s (24.0MB/s)(22.9MiB/1001msec); 0 zone resets 00:13:54.660 slat (usec): min=10, max=2669, avg=81.35, stdev=330.70 00:13:54.660 clat (usec): min=256, max=13039, avg=10735.54, stdev=1320.14 00:13:54.660 lat (usec): min=2314, max=13307, avg=10816.89, stdev=1319.46 00:13:54.660 clat percentiles (usec): 00:13:54.660 | 1.00th=[ 5735], 5.00th=[ 9241], 10.00th=[ 9503], 20.00th=[ 9634], 00:13:54.660 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[10945], 60.00th=[11338], 00:13:54.660 | 70.00th=[11600], 80.00th=[11863], 90.00th=[12256], 95.00th=[12387], 00:13:54.660 | 99.00th=[12780], 99.50th=[12911], 99.90th=[13042], 99.95th=[13042], 00:13:54.660 | 99.99th=[13042] 00:13:54.660 bw ( KiB/s): min=21344, max=24625, per=34.78%, avg=22984.50, stdev=2320.02, samples=2 00:13:54.660 iops : min= 5336, max= 6156, avg=5746.00, stdev=579.83, samples=2 00:13:54.660 lat (usec) : 500=0.01% 00:13:54.660 lat (msec) : 4=0.32%, 10=20.65%, 20=79.02% 00:13:54.660 cpu : usr=4.40%, sys=16.60%, ctx=606, majf=0, minf=5 00:13:54.660 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:13:54.660 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:54.660 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:54.660 issued rwts: total=5632,5867,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:54.660 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:54.660 job1: (groupid=0, jobs=1): err= 0: pid=71828: Wed Jul 24 17:14:40 2024 00:13:54.660 read: IOPS=2286, BW=9145KiB/s (9365kB/s)(9200KiB/1006msec) 00:13:54.660 slat (usec): min=12, max=13480, avg=180.86, stdev=878.26 00:13:54.660 clat (usec): min=1289, max=37901, avg=21639.49, stdev=5583.34 00:13:54.660 lat (usec): min=9082, max=37941, avg=21820.35, stdev=5635.31 00:13:54.660 clat percentiles (usec): 00:13:54.660 | 1.00th=[ 9503], 5.00th=[16581], 10.00th=[17695], 20.00th=[17957], 00:13:54.660 | 30.00th=[18220], 40.00th=[18482], 50.00th=[18744], 60.00th=[20055], 00:13:54.660 | 70.00th=[23462], 80.00th=[27395], 90.00th=[30278], 95.00th=[33817], 00:13:54.660 | 99.00th=[35914], 99.50th=[35914], 99.90th=[37487], 99.95th=[38011], 00:13:54.660 | 99.99th=[38011] 00:13:54.660 write: IOPS=2544, BW=9.94MiB/s (10.4MB/s)(10.0MiB/1006msec); 0 zone resets 00:13:54.660 slat (usec): min=17, max=9039, avg=219.89, 
stdev=779.46 00:13:54.660 clat (usec): min=16356, max=61846, avg=29963.30, stdev=10392.13 00:13:54.660 lat (usec): min=16389, max=61874, avg=30183.19, stdev=10449.71 00:13:54.660 clat percentiles (usec): 00:13:54.660 | 1.00th=[19006], 5.00th=[22152], 10.00th=[22676], 20.00th=[23462], 00:13:54.660 | 30.00th=[23987], 40.00th=[24249], 50.00th=[24511], 60.00th=[26870], 00:13:54.660 | 70.00th=[28705], 80.00th=[36439], 90.00th=[49546], 95.00th=[55313], 00:13:54.660 | 99.00th=[60556], 99.50th=[61080], 99.90th=[61604], 99.95th=[61604], 00:13:54.660 | 99.99th=[61604] 00:13:54.660 bw ( KiB/s): min=10084, max=10416, per=15.51%, avg=10250.00, stdev=234.76, samples=2 00:13:54.660 iops : min= 2521, max= 2604, avg=2562.50, stdev=58.69, samples=2 00:13:54.660 lat (msec) : 2=0.02%, 10=0.86%, 20=28.13%, 50=66.36%, 100=4.63% 00:13:54.660 cpu : usr=2.89%, sys=8.56%, ctx=359, majf=0, minf=11 00:13:54.660 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:13:54.660 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:54.660 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:54.660 issued rwts: total=2300,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:54.660 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:54.660 job2: (groupid=0, jobs=1): err= 0: pid=71829: Wed Jul 24 17:14:40 2024 00:13:54.660 read: IOPS=4830, BW=18.9MiB/s (19.8MB/s)(18.9MiB/1001msec) 00:13:54.660 slat (usec): min=5, max=4078, avg=99.11, stdev=512.80 00:13:54.660 clat (usec): min=680, max=17224, avg=12727.22, stdev=1362.11 00:13:54.660 lat (usec): min=696, max=17255, avg=12826.32, stdev=1408.22 00:13:54.660 clat percentiles (usec): 00:13:54.660 | 1.00th=[ 8848], 5.00th=[10552], 10.00th=[11207], 20.00th=[12387], 00:13:54.660 | 30.00th=[12387], 40.00th=[12518], 50.00th=[12780], 60.00th=[12911], 00:13:54.660 | 70.00th=[13042], 80.00th=[13435], 90.00th=[14091], 95.00th=[14615], 00:13:54.660 | 99.00th=[16057], 99.50th=[16319], 99.90th=[16712], 99.95th=[17171], 00:13:54.660 | 99.99th=[17171] 00:13:54.660 write: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec); 0 zone resets 00:13:54.660 slat (usec): min=10, max=5601, avg=93.99, stdev=392.98 00:13:54.660 clat (usec): min=8708, max=17189, avg=12684.38, stdev=1326.72 00:13:54.660 lat (usec): min=8732, max=17206, avg=12778.37, stdev=1305.26 00:13:54.660 clat percentiles (usec): 00:13:54.660 | 1.00th=[ 9241], 5.00th=[ 9896], 10.00th=[10290], 20.00th=[11994], 00:13:54.660 | 30.00th=[12387], 40.00th=[12780], 50.00th=[12911], 60.00th=[13042], 00:13:54.660 | 70.00th=[13173], 80.00th=[13435], 90.00th=[13829], 95.00th=[14746], 00:13:54.660 | 99.00th=[16057], 99.50th=[16188], 99.90th=[17171], 99.95th=[17171], 00:13:54.660 | 99.99th=[17171] 00:13:54.660 bw ( KiB/s): min=20480, max=20521, per=31.02%, avg=20500.50, stdev=28.99, samples=2 00:13:54.660 iops : min= 5120, max= 5130, avg=5125.00, stdev= 7.07, samples=2 00:13:54.660 lat (usec) : 750=0.03% 00:13:54.660 lat (msec) : 10=4.35%, 20=95.62% 00:13:54.660 cpu : usr=5.10%, sys=13.10%, ctx=528, majf=0, minf=10 00:13:54.660 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:13:54.660 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:54.660 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:54.660 issued rwts: total=4835,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:54.660 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:54.660 job3: (groupid=0, jobs=1): err= 0: pid=71830: 
Wed Jul 24 17:14:40 2024 00:13:54.660 read: IOPS=2727, BW=10.7MiB/s (11.2MB/s)(10.7MiB/1005msec) 00:13:54.660 slat (usec): min=6, max=13066, avg=151.21, stdev=783.93 00:13:54.660 clat (usec): min=3820, max=35162, avg=17570.19, stdev=3916.24 00:13:54.660 lat (usec): min=6280, max=35202, avg=17721.40, stdev=3989.93 00:13:54.660 clat percentiles (usec): 00:13:54.660 | 1.00th=[ 6980], 5.00th=[13435], 10.00th=[14353], 20.00th=[15008], 00:13:54.660 | 30.00th=[15401], 40.00th=[15664], 50.00th=[16057], 60.00th=[18220], 00:13:54.660 | 70.00th=[19268], 80.00th=[20317], 90.00th=[22152], 95.00th=[24773], 00:13:54.660 | 99.00th=[28967], 99.50th=[30540], 99.90th=[32113], 99.95th=[32113], 00:13:54.660 | 99.99th=[35390] 00:13:54.660 write: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec); 0 zone resets 00:13:54.660 slat (usec): min=13, max=10987, avg=181.61, stdev=634.49 00:13:54.660 clat (usec): min=12693, max=38586, avg=25607.32, stdev=4739.52 00:13:54.660 lat (usec): min=12723, max=38628, avg=25788.93, stdev=4785.77 00:13:54.660 clat percentiles (usec): 00:13:54.661 | 1.00th=[14353], 5.00th=[18744], 10.00th=[19530], 20.00th=[22152], 00:13:54.661 | 30.00th=[23200], 40.00th=[23987], 50.00th=[24511], 60.00th=[25822], 00:13:54.661 | 70.00th=[28705], 80.00th=[30540], 90.00th=[32113], 95.00th=[33817], 00:13:54.661 | 99.00th=[35390], 99.50th=[37487], 99.90th=[38536], 99.95th=[38536], 00:13:54.661 | 99.99th=[38536] 00:13:54.661 bw ( KiB/s): min=12288, max=12288, per=18.60%, avg=12288.00, stdev= 0.00, samples=2 00:13:54.661 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:13:54.661 lat (msec) : 4=0.02%, 10=1.08%, 20=42.73%, 50=56.17% 00:13:54.661 cpu : usr=2.99%, sys=10.56%, ctx=422, majf=0, minf=9 00:13:54.661 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:13:54.661 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:54.661 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:54.661 issued rwts: total=2741,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:54.661 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:54.661 00:13:54.661 Run status group 0 (all jobs): 00:13:54.661 READ: bw=60.2MiB/s (63.1MB/s), 9145KiB/s-22.0MiB/s (9365kB/s-23.0MB/s), io=60.6MiB (63.5MB), run=1001-1006msec 00:13:54.661 WRITE: bw=64.5MiB/s (67.7MB/s), 9.94MiB/s-22.9MiB/s (10.4MB/s-24.0MB/s), io=64.9MiB (68.1MB), run=1001-1006msec 00:13:54.661 00:13:54.661 Disk stats (read/write): 00:13:54.661 nvme0n1: ios=4799/5120, merge=0/0, ticks=12235/11529, in_queue=23764, util=88.48% 00:13:54.661 nvme0n2: ios=2094/2087, merge=0/0, ticks=13949/20074, in_queue=34023, util=88.88% 00:13:54.661 nvme0n3: ios=4096/4447, merge=0/0, ticks=15936/16752, in_queue=32688, util=88.95% 00:13:54.661 nvme0n4: ios=2425/2560, merge=0/0, ticks=20830/30735, in_queue=51565, util=89.81% 00:13:54.661 17:14:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:13:54.661 17:14:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=71849 00:13:54.661 17:14:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:13:54.661 17:14:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:13:54.661 [global] 00:13:54.661 thread=1 00:13:54.661 invalidate=1 00:13:54.661 rw=read 00:13:54.661 time_based=1 00:13:54.661 runtime=10 00:13:54.661 ioengine=libaio 00:13:54.661 direct=1 00:13:54.661 bs=4096 
00:13:54.661 iodepth=1 00:13:54.661 norandommap=1 00:13:54.661 numjobs=1 00:13:54.661 00:13:54.661 [job0] 00:13:54.661 filename=/dev/nvme0n1 00:13:54.661 [job1] 00:13:54.661 filename=/dev/nvme0n2 00:13:54.661 [job2] 00:13:54.661 filename=/dev/nvme0n3 00:13:54.661 [job3] 00:13:54.661 filename=/dev/nvme0n4 00:13:54.661 Could not set queue depth (nvme0n1) 00:13:54.661 Could not set queue depth (nvme0n2) 00:13:54.661 Could not set queue depth (nvme0n3) 00:13:54.661 Could not set queue depth (nvme0n4) 00:13:54.661 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:54.661 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:54.661 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:54.661 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:54.661 fio-3.35 00:13:54.661 Starting 4 threads 00:13:57.949 17:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:13:57.949 fio: pid=71893, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:57.949 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=38178816, buflen=4096 00:13:57.949 17:14:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:13:57.949 fio: pid=71892, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:57.949 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=42393600, buflen=4096 00:13:57.949 17:14:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:57.949 17:14:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:13:58.207 fio: pid=71890, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:58.207 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=8015872, buflen=4096 00:13:58.207 17:14:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:58.207 17:14:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:13:58.465 fio: pid=71891, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:58.465 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=16076800, buflen=4096 00:13:58.465 00:13:58.465 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=71890: Wed Jul 24 17:14:44 2024 00:13:58.465 read: IOPS=5234, BW=20.4MiB/s (21.4MB/s)(71.6MiB/3504msec) 00:13:58.465 slat (usec): min=11, max=11799, avg=17.00, stdev=129.65 00:13:58.465 clat (usec): min=38, max=6492, avg=172.72, stdev=73.46 00:13:58.465 lat (usec): min=139, max=12037, avg=189.72, stdev=149.86 00:13:58.465 clat percentiles (usec): 00:13:58.465 | 1.00th=[ 145], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 155], 00:13:58.465 | 30.00th=[ 157], 40.00th=[ 161], 50.00th=[ 163], 60.00th=[ 167], 00:13:58.465 | 70.00th=[ 172], 80.00th=[ 178], 90.00th=[ 200], 95.00th=[ 239], 00:13:58.465 | 99.00th=[ 265], 99.50th=[ 273], 99.90th=[ 693], 99.95th=[ 
1221], 00:13:58.465 | 99.99th=[ 3097] 00:13:58.465 bw ( KiB/s): min=21672, max=22312, per=35.45%, avg=21988.00, stdev=306.21, samples=6 00:13:58.465 iops : min= 5418, max= 5578, avg=5497.00, stdev=76.55, samples=6 00:13:58.465 lat (usec) : 50=0.01%, 250=97.25%, 500=2.60%, 750=0.05%, 1000=0.02% 00:13:58.465 lat (msec) : 2=0.03%, 4=0.03%, 10=0.01% 00:13:58.465 cpu : usr=1.54%, sys=6.45%, ctx=18358, majf=0, minf=1 00:13:58.465 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:58.465 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:58.465 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:58.465 issued rwts: total=18342,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:58.465 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:58.465 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=71891: Wed Jul 24 17:14:44 2024 00:13:58.465 read: IOPS=5400, BW=21.1MiB/s (22.1MB/s)(79.3MiB/3761msec) 00:13:58.465 slat (usec): min=10, max=11863, avg=18.66, stdev=174.20 00:13:58.465 clat (usec): min=125, max=2714, avg=165.19, stdev=52.36 00:13:58.465 lat (usec): min=139, max=12036, avg=183.85, stdev=182.88 00:13:58.466 clat percentiles (usec): 00:13:58.466 | 1.00th=[ 133], 5.00th=[ 141], 10.00th=[ 147], 20.00th=[ 151], 00:13:58.466 | 30.00th=[ 155], 40.00th=[ 157], 50.00th=[ 159], 60.00th=[ 161], 00:13:58.466 | 70.00th=[ 163], 80.00th=[ 167], 90.00th=[ 178], 95.00th=[ 235], 00:13:58.466 | 99.00th=[ 262], 99.50th=[ 277], 99.90th=[ 586], 99.95th=[ 1172], 00:13:58.466 | 99.99th=[ 2606] 00:13:58.466 bw ( KiB/s): min=15947, max=22768, per=34.92%, avg=21659.29, stdev=2536.41, samples=7 00:13:58.466 iops : min= 3986, max= 5692, avg=5414.71, stdev=634.38, samples=7 00:13:58.466 lat (usec) : 250=97.58%, 500=2.29%, 750=0.04%, 1000=0.01% 00:13:58.466 lat (msec) : 2=0.04%, 4=0.02% 00:13:58.466 cpu : usr=1.33%, sys=7.07%, ctx=20319, majf=0, minf=1 00:13:58.466 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:58.466 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:58.466 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:58.466 issued rwts: total=20310,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:58.466 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:58.466 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=71892: Wed Jul 24 17:14:44 2024 00:13:58.466 read: IOPS=3175, BW=12.4MiB/s (13.0MB/s)(40.4MiB/3260msec) 00:13:58.466 slat (usec): min=12, max=11705, avg=21.58, stdev=141.93 00:13:58.466 clat (usec): min=137, max=7796, avg=291.50, stdev=154.49 00:13:58.466 lat (usec): min=152, max=11987, avg=313.08, stdev=210.16 00:13:58.466 clat percentiles (usec): 00:13:58.466 | 1.00th=[ 151], 5.00th=[ 159], 10.00th=[ 174], 20.00th=[ 281], 00:13:58.466 | 30.00th=[ 289], 40.00th=[ 293], 50.00th=[ 297], 60.00th=[ 306], 00:13:58.466 | 70.00th=[ 310], 80.00th=[ 314], 90.00th=[ 322], 95.00th=[ 334], 00:13:58.466 | 99.00th=[ 363], 99.50th=[ 388], 99.90th=[ 2311], 99.95th=[ 3720], 00:13:58.466 | 99.99th=[ 7504] 00:13:58.466 bw ( KiB/s): min=11704, max=12392, per=19.61%, avg=12162.67, stdev=309.08, samples=6 00:13:58.466 iops : min= 2926, max= 3098, avg=3040.67, stdev=77.27, samples=6 00:13:58.466 lat (usec) : 250=12.04%, 500=87.70%, 750=0.07%, 1000=0.05% 00:13:58.466 lat (msec) : 2=0.03%, 4=0.07%, 10=0.04% 00:13:58.466 cpu : usr=1.07%, sys=5.09%, ctx=10356, 
majf=0, minf=1 00:13:58.466 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:58.466 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:58.466 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:58.466 issued rwts: total=10351,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:58.466 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:58.466 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=71893: Wed Jul 24 17:14:44 2024 00:13:58.466 read: IOPS=3092, BW=12.1MiB/s (12.7MB/s)(36.4MiB/3014msec) 00:13:58.466 slat (nsec): min=11815, max=86481, avg=20565.07, stdev=3130.54 00:13:58.466 clat (usec): min=154, max=3758, avg=300.63, stdev=60.74 00:13:58.466 lat (usec): min=172, max=3776, avg=321.19, stdev=60.84 00:13:58.466 clat percentiles (usec): 00:13:58.466 | 1.00th=[ 169], 5.00th=[ 265], 10.00th=[ 277], 20.00th=[ 289], 00:13:58.466 | 30.00th=[ 293], 40.00th=[ 297], 50.00th=[ 302], 60.00th=[ 306], 00:13:58.466 | 70.00th=[ 310], 80.00th=[ 314], 90.00th=[ 322], 95.00th=[ 334], 00:13:58.466 | 99.00th=[ 363], 99.50th=[ 383], 99.90th=[ 734], 99.95th=[ 1369], 00:13:58.466 | 99.99th=[ 3752] 00:13:58.466 bw ( KiB/s): min=12240, max=12736, per=19.98%, avg=12393.33, stdev=178.92, samples=6 00:13:58.466 iops : min= 3060, max= 3184, avg=3098.33, stdev=44.73, samples=6 00:13:58.466 lat (usec) : 250=2.91%, 500=96.80%, 750=0.18%, 1000=0.03% 00:13:58.466 lat (msec) : 2=0.02%, 4=0.04% 00:13:58.466 cpu : usr=1.19%, sys=5.48%, ctx=9323, majf=0, minf=1 00:13:58.466 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:58.466 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:58.466 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:58.466 issued rwts: total=9322,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:58.466 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:58.466 00:13:58.466 Run status group 0 (all jobs): 00:13:58.466 READ: bw=60.6MiB/s (63.5MB/s), 12.1MiB/s-21.1MiB/s (12.7MB/s-22.1MB/s), io=228MiB (239MB), run=3014-3761msec 00:13:58.466 00:13:58.466 Disk stats (read/write): 00:13:58.466 nvme0n1: ios=17805/0, merge=0/0, ticks=3104/0, in_queue=3104, util=95.39% 00:13:58.466 nvme0n2: ios=19412/0, merge=0/0, ticks=3297/0, in_queue=3297, util=95.29% 00:13:58.466 nvme0n3: ios=9568/0, merge=0/0, ticks=2925/0, in_queue=2925, util=95.90% 00:13:58.466 nvme0n4: ios=8876/0, merge=0/0, ticks=2683/0, in_queue=2683, util=96.76% 00:13:58.466 17:14:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:58.466 17:14:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:13:58.724 17:14:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:58.724 17:14:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:13:58.983 17:14:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:58.983 17:14:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:13:59.241 
17:14:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:59.241 17:14:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:13:59.499 17:14:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:59.499 17:14:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:14:00.091 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:14:00.091 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 71849 00:14:00.091 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:14:00.091 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:00.091 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:00.091 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:00.091 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:14:00.091 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:00.091 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:00.091 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:00.091 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:00.091 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:14:00.091 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:14:00.091 nvmf hotplug test: fio failed as expected 00:14:00.091 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:14:00.091 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:00.353 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:14:00.353 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:14:00.353 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:14:00.353 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:14:00.353 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:14:00.353 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:00.353 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:14:00.353 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:00.353 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:14:00.353 17:14:46 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:00.353 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:00.353 rmmod nvme_tcp 00:14:00.353 rmmod nvme_fabrics 00:14:00.353 rmmod nvme_keyring 00:14:00.353 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:00.353 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:14:00.353 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:14:00.353 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 71351 ']' 00:14:00.353 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 71351 00:14:00.353 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 71351 ']' 00:14:00.353 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 71351 00:14:00.353 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:14:00.354 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:00.354 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71351 00:14:00.354 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:00.354 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:00.354 killing process with pid 71351 00:14:00.354 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71351' 00:14:00.354 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 71351 00:14:00.354 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 71351 00:14:00.611 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:00.611 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:00.611 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:00.611 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:00.611 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:00.611 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:00.611 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:00.611 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:00.611 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:00.611 00:14:00.611 real 0m19.672s 00:14:00.611 user 1m14.711s 00:14:00.611 sys 0m9.694s 00:14:00.611 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:00.611 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.611 ************************************ 00:14:00.611 END 
TEST nvmf_fio_target 00:14:00.611 ************************************ 00:14:00.611 17:14:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:14:00.611 17:14:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:00.611 17:14:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:00.612 17:14:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:14:00.612 ************************************ 00:14:00.612 START TEST nvmf_bdevio 00:14:00.612 ************************************ 00:14:00.612 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:14:00.870 * Looking for test storage... 00:14:00.871 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:00.871 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:00.871 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:14:00.871 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:00.871 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:00.871 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:00.871 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:00.871 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:00.871 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:00.871 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:00.871 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:00.871 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:00.871 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:00.871 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:14:00.871 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:14:00.871 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:00.871 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:00.871 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:00.871 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:00.871 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:00.871 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:00.871 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:00.871 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:00.871 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.871 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.871 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.871 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:14:00.871 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.871 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:14:00.871 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:00.871 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:00.871 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:00.871 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:00.871 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:00.871 17:14:46 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:00.871 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:00.871 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:00.871 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:00.871 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:00.871 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:14:00.871 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:00.871 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:00.871 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:00.871 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:00.871 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:00.871 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:00.871 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:00.871 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:00.871 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:00.871 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:00.871 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:00.871 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:00.871 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:00.871 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:00.871 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:00.871 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:00.871 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:00.871 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:00.871 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:00.871 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:00.871 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:00.871 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:00.871 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:00.871 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:00.871 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:00.871 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:00.871 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:00.871 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:00.871 Cannot find device "nvmf_tgt_br" 00:14:00.871 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # true 00:14:00.871 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:00.871 Cannot find device "nvmf_tgt_br2" 00:14:00.871 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # true 00:14:00.871 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:00.871 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:00.871 Cannot find device "nvmf_tgt_br" 00:14:00.871 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # true 00:14:00.871 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:00.871 Cannot find device "nvmf_tgt_br2" 00:14:00.871 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # true 00:14:00.871 17:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:00.871 17:14:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:00.871 17:14:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:00.871 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:00.871 17:14:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:14:00.871 17:14:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:00.871 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:00.871 17:14:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:14:00.871 17:14:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:00.871 17:14:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:00.871 17:14:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:00.871 17:14:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:00.871 17:14:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:00.871 17:14:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:00.871 17:14:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:00.872 17:14:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:01.131 17:14:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:01.131 17:14:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:01.131 17:14:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:01.131 17:14:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:01.131 17:14:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:01.131 17:14:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:01.131 17:14:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:01.131 17:14:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:01.131 17:14:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:01.131 17:14:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:01.131 17:14:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:01.131 17:14:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:01.131 17:14:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:01.131 17:14:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:01.131 17:14:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:01.131 17:14:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:01.131 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:01.131 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:14:01.131 00:14:01.131 --- 10.0.0.2 ping statistics --- 00:14:01.131 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:01.131 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:14:01.131 17:14:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:01.131 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:01.131 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:14:01.131 00:14:01.131 --- 10.0.0.3 ping statistics --- 00:14:01.132 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:01.132 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:14:01.132 17:14:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:01.132 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:01.132 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:14:01.132 00:14:01.132 --- 10.0.0.1 ping statistics --- 00:14:01.132 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:01.132 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:14:01.132 17:14:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:01.132 17:14:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@433 -- # return 0 00:14:01.132 17:14:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:01.132 17:14:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:01.132 17:14:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:01.132 17:14:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:01.132 17:14:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:01.132 17:14:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:01.132 17:14:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:01.132 17:14:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:14:01.132 17:14:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:01.132 17:14:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:01.132 17:14:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:01.132 17:14:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=72225 00:14:01.132 17:14:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 72225 00:14:01.132 17:14:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:14:01.132 17:14:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 72225 ']' 00:14:01.132 17:14:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:01.132 17:14:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:01.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:01.132 17:14:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:01.132 17:14:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:01.132 17:14:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:01.132 [2024-07-24 17:14:47.310232] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
00:14:01.132 [2024-07-24 17:14:47.310358] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:01.391 [2024-07-24 17:14:47.449165] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:01.391 [2024-07-24 17:14:47.554699] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:01.391 [2024-07-24 17:14:47.554756] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:01.391 [2024-07-24 17:14:47.554769] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:01.391 [2024-07-24 17:14:47.554777] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:01.391 [2024-07-24 17:14:47.554784] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:01.391 [2024-07-24 17:14:47.554882] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:14:01.391 [2024-07-24 17:14:47.555038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:14:01.391 [2024-07-24 17:14:47.555160] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:14:01.391 [2024-07-24 17:14:47.555587] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:02.326 17:14:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:02.326 17:14:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:14:02.326 17:14:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:02.326 17:14:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:02.326 17:14:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:02.326 17:14:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:02.326 17:14:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:02.326 17:14:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.326 17:14:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:02.326 [2024-07-24 17:14:48.304608] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:02.326 17:14:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.326 17:14:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:02.326 17:14:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.326 17:14:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:02.326 Malloc0 00:14:02.326 17:14:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.326 17:14:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:02.326 17:14:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 
00:14:02.326 17:14:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:02.326 17:14:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.326 17:14:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:02.326 17:14:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.326 17:14:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:02.326 17:14:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.326 17:14:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:02.326 17:14:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.326 17:14:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:02.326 [2024-07-24 17:14:48.372381] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:02.326 17:14:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.326 17:14:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:14:02.326 17:14:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:14:02.326 17:14:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:14:02.326 17:14:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:14:02.326 17:14:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:02.326 17:14:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:02.326 { 00:14:02.326 "params": { 00:14:02.326 "name": "Nvme$subsystem", 00:14:02.326 "trtype": "$TEST_TRANSPORT", 00:14:02.326 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:02.326 "adrfam": "ipv4", 00:14:02.326 "trsvcid": "$NVMF_PORT", 00:14:02.326 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:02.326 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:02.326 "hdgst": ${hdgst:-false}, 00:14:02.326 "ddgst": ${ddgst:-false} 00:14:02.326 }, 00:14:02.326 "method": "bdev_nvme_attach_controller" 00:14:02.326 } 00:14:02.326 EOF 00:14:02.326 )") 00:14:02.326 17:14:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:14:02.326 17:14:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:14:02.326 17:14:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:14:02.326 17:14:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:02.326 "params": { 00:14:02.326 "name": "Nvme1", 00:14:02.326 "trtype": "tcp", 00:14:02.326 "traddr": "10.0.0.2", 00:14:02.326 "adrfam": "ipv4", 00:14:02.326 "trsvcid": "4420", 00:14:02.326 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:02.326 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:02.326 "hdgst": false, 00:14:02.327 "ddgst": false 00:14:02.327 }, 00:14:02.327 "method": "bdev_nvme_attach_controller" 00:14:02.327 }' 00:14:02.327 [2024-07-24 17:14:48.433056] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
00:14:02.327 [2024-07-24 17:14:48.433162] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72279 ] 00:14:02.584 [2024-07-24 17:14:48.573921] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:02.584 [2024-07-24 17:14:48.694815] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:02.584 [2024-07-24 17:14:48.694956] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:02.584 [2024-07-24 17:14:48.694960] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:02.842 I/O targets: 00:14:02.842 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:14:02.842 00:14:02.842 00:14:02.842 CUnit - A unit testing framework for C - Version 2.1-3 00:14:02.842 http://cunit.sourceforge.net/ 00:14:02.842 00:14:02.842 00:14:02.842 Suite: bdevio tests on: Nvme1n1 00:14:02.842 Test: blockdev write read block ...passed 00:14:02.842 Test: blockdev write zeroes read block ...passed 00:14:02.842 Test: blockdev write zeroes read no split ...passed 00:14:02.842 Test: blockdev write zeroes read split ...passed 00:14:02.842 Test: blockdev write zeroes read split partial ...passed 00:14:02.842 Test: blockdev reset ...[2024-07-24 17:14:48.984913] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:14:02.842 [2024-07-24 17:14:48.985038] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2292180 (9): Bad file descriptor 00:14:02.842 [2024-07-24 17:14:48.998661] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:14:02.842 passed 00:14:02.842 Test: blockdev write read 8 blocks ...passed 00:14:02.842 Test: blockdev write read size > 128k ...passed 00:14:02.842 Test: blockdev write read invalid size ...passed 00:14:02.842 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:02.842 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:02.842 Test: blockdev write read max offset ...passed 00:14:03.101 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:03.101 Test: blockdev writev readv 8 blocks ...passed 00:14:03.101 Test: blockdev writev readv 30 x 1block ...passed 00:14:03.101 Test: blockdev writev readv block ...passed 00:14:03.101 Test: blockdev writev readv size > 128k ...passed 00:14:03.101 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:03.101 Test: blockdev comparev and writev ...[2024-07-24 17:14:49.168360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:03.101 [2024-07-24 17:14:49.168403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:03.101 [2024-07-24 17:14:49.168424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:03.101 [2024-07-24 17:14:49.168446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:14:03.101 [2024-07-24 17:14:49.168796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:03.101 [2024-07-24 17:14:49.168823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:14:03.101 [2024-07-24 17:14:49.168841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:03.101 [2024-07-24 17:14:49.168852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:14:03.101 [2024-07-24 17:14:49.169349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:03.101 [2024-07-24 17:14:49.169381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:14:03.101 [2024-07-24 17:14:49.169400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:03.101 [2024-07-24 17:14:49.169411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:14:03.101 [2024-07-24 17:14:49.169778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:03.101 [2024-07-24 17:14:49.169808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:14:03.101 [2024-07-24 17:14:49.169827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:03.101 [2024-07-24 17:14:49.169838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:14:03.101 passed 00:14:03.101 Test: blockdev nvme passthru rw ...passed 00:14:03.101 Test: blockdev nvme passthru vendor specific ...passed 00:14:03.101 Test: blockdev nvme admin passthru ...[2024-07-24 17:14:49.252634] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:03.101 [2024-07-24 17:14:49.252673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:14:03.101 [2024-07-24 17:14:49.252793] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:03.101 [2024-07-24 17:14:49.252810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:14:03.101 [2024-07-24 17:14:49.252926] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:03.101 [2024-07-24 17:14:49.252943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:14:03.101 [2024-07-24 17:14:49.253061] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:03.101 [2024-07-24 17:14:49.253077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:14:03.101 passed 00:14:03.101 Test: blockdev copy ...passed 00:14:03.101 00:14:03.101 Run Summary: Type Total Ran Passed Failed Inactive 00:14:03.101 suites 1 1 n/a 0 0 00:14:03.101 tests 23 23 23 0 0 00:14:03.101 asserts 152 152 152 0 n/a 00:14:03.101 00:14:03.101 Elapsed time = 0.875 seconds 00:14:03.360 17:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:03.360 17:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.360 17:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:03.360 17:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.360 17:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:14:03.360 17:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:14:03.360 17:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:03.360 17:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:14:03.360 17:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:03.360 17:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:14:03.360 17:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:03.360 17:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:03.360 rmmod nvme_tcp 00:14:03.360 rmmod nvme_fabrics 00:14:03.360 rmmod nvme_keyring 00:14:03.619 17:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:03.619 17:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:14:03.619 17:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 
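The teardown around this point deletes the subsystem over RPC, unloads the kernel initiator modules, and then kills the target process (pid 72225, below). A minimal equivalent of the commands shown in the trace, assuming the default RPC socket:
  # drop the subsystem bdevio was exercising
  rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  # unload the kernel NVMe/TCP initiator stack; nvme-tcp has to go before nvme-fabrics,
  # and -v -r also removes the now-unused nvme_fabrics / nvme_keyring (the rmmod lines above)
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics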
00:14:03.619 17:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 72225 ']' 00:14:03.619 17:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 72225 00:14:03.619 17:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 72225 ']' 00:14:03.619 17:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 72225 00:14:03.619 17:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:14:03.619 17:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:03.619 17:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72225 00:14:03.619 17:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:14:03.619 17:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:14:03.619 killing process with pid 72225 00:14:03.619 17:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72225' 00:14:03.619 17:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 72225 00:14:03.619 17:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 72225 00:14:03.878 17:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:03.878 17:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:03.878 17:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:03.878 17:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:03.878 17:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:03.878 17:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:03.878 17:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:03.878 17:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:03.878 17:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:03.878 ************************************ 00:14:03.878 END TEST nvmf_bdevio 00:14:03.878 ************************************ 00:14:03.878 00:14:03.878 real 0m3.147s 00:14:03.878 user 0m11.368s 00:14:03.878 sys 0m0.764s 00:14:03.878 17:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:03.878 17:14:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:03.878 17:14:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:14:03.878 00:14:03.878 real 3m33.223s 00:14:03.878 user 11m15.943s 00:14:03.878 sys 1m0.929s 00:14:03.878 ************************************ 00:14:03.878 END TEST nvmf_target_core 00:14:03.878 ************************************ 00:14:03.878 17:14:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:03.878 17:14:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:14:03.878 17:14:50 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra 
/home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:14:03.878 17:14:50 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:03.878 17:14:50 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:03.878 17:14:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:03.878 ************************************ 00:14:03.878 START TEST nvmf_target_extra 00:14:03.878 ************************************ 00:14:03.878 17:14:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:14:03.878 * Looking for test storage... 00:14:04.137 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:14:04.137 17:14:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:04.137 17:14:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:14:04.137 17:14:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:04.137 17:14:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:04.137 17:14:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:04.137 17:14:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:04.137 17:14:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:04.137 17:14:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:04.137 17:14:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:04.137 17:14:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:04.137 17:14:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:04.137 17:14:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:04.137 17:14:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:14:04.137 17:14:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:14:04.137 17:14:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:04.137 17:14:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:04.137 17:14:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:04.137 17:14:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:04.137 17:14:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:04.137 17:14:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:04.137 17:14:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:04.137 17:14:50 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:04.137 17:14:50 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.137 17:14:50 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.137 17:14:50 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.137 17:14:50 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:14:04.137 17:14:50 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.137 17:14:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@47 -- # : 0 00:14:04.137 17:14:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:04.137 17:14:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:04.137 17:14:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:04.137 17:14:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:04.137 17:14:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:04.137 17:14:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:04.137 17:14:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:04.137 17:14:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:04.137 17:14:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:14:04.137 17:14:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:14:04.137 17:14:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:14:04.137 17:14:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:14:04.137 17:14:50 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:04.137 17:14:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:04.138 17:14:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:04.138 ************************************ 00:14:04.138 START TEST nvmf_example 00:14:04.138 ************************************ 00:14:04.138 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:14:04.138 * Looking for test storage... 00:14:04.138 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:04.138 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:04.138 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:14:04.138 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:04.138 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:04.138 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:04.138 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:04.138 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:04.138 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:04.138 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:04.138 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:04.138 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:04.138 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:04.138 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:14:04.138 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:14:04.138 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:04.138 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:04.138 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:04.138 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:04.138 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:04.138 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:04.138 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:04.138 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:04.138 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.138 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.138 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.138 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:14:04.138 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.138 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:14:04.138 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:04.138 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:04.138 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:04.138 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:04.138 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:04.138 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:04.138 17:14:50 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:04.138 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:04.138 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:14:04.138 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:14:04.138 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:14:04.138 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:14:04.138 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:14:04.138 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:14:04.138 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:14:04.138 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:14:04.138 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:04.138 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:04.138 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:14:04.138 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:04.138 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:04.138 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:04.138 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:04.138 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:04.138 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:04.138 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:04.138 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:04.138 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:04.138 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:04.138 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:04.138 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:04.138 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:04.138 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:04.138 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:04.138 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:04.138 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:04.138 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 
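nvmf_veth_init, traced below, builds a small veth topology from these variables: the initiator interface stays on the host as 10.0.0.1, the two target interfaces move into the nvmf_tgt_ns_spdk namespace as 10.0.0.2 and 10.0.0.3, and the host-side peer ends are enslaved to the nvmf_br bridge. A condensed sketch of the commands that follow (the link-up steps are omitted here for brevity):
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
The iptables rule only opens TCP 4420 (NVMF_PORT from common.sh) on the initiator interface, which is the port the listeners created later in this test bind to.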
00:14:04.138 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:04.138 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:04.138 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:04.138 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:04.138 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:04.138 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:04.138 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:04.138 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:04.138 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:04.138 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:04.138 Cannot find device "nvmf_tgt_br" 00:14:04.138 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@155 -- # true 00:14:04.138 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:04.138 Cannot find device "nvmf_tgt_br2" 00:14:04.138 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@156 -- # true 00:14:04.138 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:04.138 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:04.138 Cannot find device "nvmf_tgt_br" 00:14:04.138 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@158 -- # true 00:14:04.138 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:04.138 Cannot find device "nvmf_tgt_br2" 00:14:04.138 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@159 -- # true 00:14:04.138 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:04.138 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:04.397 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:04.397 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:04.397 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@162 -- # true 00:14:04.397 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:04.397 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:04.397 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@163 -- # true 00:14:04.397 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:04.397 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:04.397 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:04.397 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:04.397 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:04.397 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:04.397 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:04.397 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:04.397 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:04.397 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:04.397 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:04.397 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:04.397 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:04.397 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:04.397 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:04.397 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:04.397 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:04.397 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:04.397 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:04.397 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:04.397 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:04.397 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:04.397 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:04.397 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:04.397 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:04.397 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:14:04.397 00:14:04.397 --- 10.0.0.2 ping statistics --- 00:14:04.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:04.397 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:14:04.397 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:04.397 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:14:04.397 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:14:04.397 00:14:04.397 --- 10.0.0.3 ping statistics --- 00:14:04.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:04.397 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:14:04.397 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:04.397 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:04.397 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:14:04.397 00:14:04.397 --- 10.0.0.1 ping statistics --- 00:14:04.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:04.397 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:14:04.397 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:04.397 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@433 -- # return 0 00:14:04.397 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:04.397 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:04.397 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:04.397 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:04.397 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:04.397 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:04.397 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:04.397 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:14:04.397 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:14:04.397 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:04.397 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:04.397 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:14:04.397 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:14:04.397 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=72506 00:14:04.397 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:14:04.397 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:04.397 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 72506 00:14:04.397 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 72506 ']' 00:14:04.397 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:04.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
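After the connectivity pings succeed, nvmfexamplestart launches build/examples/nvmf inside the target namespace (pid 72506 above), and the rest of the test reduces to a handful of RPCs, one initiator-side perf run, and the usual teardown. Condensed from the rpc_cmd / spdk_nvme_perf invocations that follow, with the same values as in the trace (rpc_cmd is the harness wrapper around scripts/rpc.py on the default socket):
  # provision a TCP listener backed by a 64 MiB, 512 B-block malloc bdev
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd bdev_malloc_create 64 512                      # creates Malloc0
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # 10 s of 4 KiB mixed random I/O at queue depth 64 from the host-side initiator
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
The Latency(us) table further down is the summary spdk_nvme_perf prints for that run: roughly 14.4K IOPS / 56 MiB/s at about 4.4 ms average latency against the malloc-backed namespace.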
00:14:04.397 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:04.397 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:04.397 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:04.397 17:14:50 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:05.771 17:14:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:05.771 17:14:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:14:05.771 17:14:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:14:05.771 17:14:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:05.771 17:14:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:05.771 17:14:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:05.771 17:14:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.771 17:14:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:05.771 17:14:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.771 17:14:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:14:05.771 17:14:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.771 17:14:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:05.771 17:14:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.771 17:14:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:14:05.771 17:14:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:05.771 17:14:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.771 17:14:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:05.771 17:14:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.771 17:14:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:14:05.771 17:14:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:05.771 17:14:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.771 17:14:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:05.771 17:14:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.771 17:14:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:05.771 17:14:51 nvmf_tcp.nvmf_target_extra.nvmf_example 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.771 17:14:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:05.771 17:14:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.771 17:14:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:14:05.771 17:14:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:17.971 Initializing NVMe Controllers 00:14:17.972 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:17.972 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:17.972 Initialization complete. Launching workers. 00:14:17.972 ======================================================== 00:14:17.972 Latency(us) 00:14:17.972 Device Information : IOPS MiB/s Average min max 00:14:17.972 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14437.97 56.40 4432.32 667.33 23112.03 00:14:17.972 ======================================================== 00:14:17.972 Total : 14437.97 56.40 4432.32 667.33 23112.03 00:14:17.972 00:14:17.972 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:14:17.972 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:14:17.972 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:17.972 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # sync 00:14:17.972 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:17.972 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:14:17.972 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:17.972 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:17.972 rmmod nvme_tcp 00:14:17.972 rmmod nvme_fabrics 00:14:17.972 rmmod nvme_keyring 00:14:17.972 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:17.972 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:14:17.972 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:14:17.972 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 72506 ']' 00:14:17.972 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # killprocess 72506 00:14:17.972 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 72506 ']' 00:14:17.972 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 72506 00:14:17.972 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname 00:14:17.972 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:17.972 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72506 00:14:17.972 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
common/autotest_common.sh@956 -- # process_name=nvmf 00:14:17.972 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']' 00:14:17.972 killing process with pid 72506 00:14:17.972 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72506' 00:14:17.972 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 72506 00:14:17.972 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 72506 00:14:17.972 nvmf threads initialize successfully 00:14:17.972 bdev subsystem init successfully 00:14:17.972 created a nvmf target service 00:14:17.972 create targets's poll groups done 00:14:17.972 all subsystems of target started 00:14:17.972 nvmf target is running 00:14:17.972 all subsystems of target stopped 00:14:17.972 destroy targets's poll groups done 00:14:17.972 destroyed the nvmf target service 00:14:17.972 bdev subsystem finish successfully 00:14:17.972 nvmf threads destroy successfully 00:14:17.972 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:17.972 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:17.972 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:17.972 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:17.972 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:17.972 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:17.972 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:17.972 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:17.972 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:17.972 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:14:17.972 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:17.972 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:17.972 00:14:17.972 real 0m12.273s 00:14:17.972 user 0m44.275s 00:14:17.972 sys 0m2.106s 00:14:17.972 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:17.972 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:14:17.972 ************************************ 00:14:17.972 END TEST nvmf_example 00:14:17.972 ************************************ 00:14:17.972 17:15:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:14:17.972 17:15:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:17.972 17:15:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:17.972 17:15:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:17.972 ************************************ 00:14:17.972 START TEST nvmf_filesystem 00:14:17.972 
************************************ 00:14:17.972 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:14:17.972 * Looking for test storage... 00:14:17.972 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:17.972 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:14:17.972 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:14:17.972 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:14:17.972 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:14:17.972 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:14:17.972 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:14:17.972 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:14:17.972 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:14:17.972 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:14:17.972 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:14:17.972 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:14:17.972 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:14:17.972 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:14:17.972 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:14:17.972 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:14:17.972 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:14:17.972 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:14:17.972 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:14:17.972 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:14:17.972 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:14:17.972 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:14:17.972 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:14:17.972 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:14:17.972 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:14:17.972 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:14:17.972 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # 
CONFIG_PGO_CAPTURE=n 00:14:17.972 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:14:17.972 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:14:17.972 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:14:17.972 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:14:17.972 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:14:17.972 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:14:17.972 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:14:17.972 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:14:17.972 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:14:17.972 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:14:17.972 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:14:17.972 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:14:17.972 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:14:17.972 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:14:17.972 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:14:17.972 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:14:17.973 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:14:17.973 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:14:17.973 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:14:17.973 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:14:17.973 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:14:17.973 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:14:17.973 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:14:17.973 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:14:17.973 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:14:17.973 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:14:17.973 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:14:17.973 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:14:17.973 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:14:17.973 17:15:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:14:17.973 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:14:17.973 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:14:17.973 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:14:17.973 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:14:17.973 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:14:17.973 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:14:17.973 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:14:17.973 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:14:17.973 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:14:17.973 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:14:17.973 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:14:17.973 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:14:17.973 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=y 00:14:17.973 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:14:17.973 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:14:17.973 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:14:17.973 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:14:17.973 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:14:17.973 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:14:17.973 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:14:17.973 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:14:17.973 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:14:17.973 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:14:17.973 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=y 00:14:17.973 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:14:17.973 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:14:17.973 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:14:17.973 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:14:17.973 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:14:17.973 17:15:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:14:17.973 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:14:17.973 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:14:17.973 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:14:17.973 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:14:17.973 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:14:17.973 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:14:17.973 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:14:17.973 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:14:17.973 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:14:17.973 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:14:17.973 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:14:17.973 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:14:17.973 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:14:17.973 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:14:17.973 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:14:17.973 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:14:17.973 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:14:17.973 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:14:17.973 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:14:17.973 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:14:17.973 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:14:17.973 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:14:17.973 #define SPDK_CONFIG_H 00:14:17.973 #define SPDK_CONFIG_APPS 1 00:14:17.973 #define SPDK_CONFIG_ARCH native 00:14:17.973 #undef SPDK_CONFIG_ASAN 00:14:17.973 #define SPDK_CONFIG_AVAHI 1 00:14:17.973 #undef SPDK_CONFIG_CET 00:14:17.973 #define SPDK_CONFIG_COVERAGE 1 00:14:17.973 #define SPDK_CONFIG_CROSS_PREFIX 00:14:17.973 #undef SPDK_CONFIG_CRYPTO 00:14:17.973 #undef SPDK_CONFIG_CRYPTO_MLX5 00:14:17.973 #undef 
SPDK_CONFIG_CUSTOMOCF 00:14:17.973 #undef SPDK_CONFIG_DAOS 00:14:17.973 #define SPDK_CONFIG_DAOS_DIR 00:14:17.973 #define SPDK_CONFIG_DEBUG 1 00:14:17.973 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:14:17.973 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:14:17.973 #define SPDK_CONFIG_DPDK_INC_DIR 00:14:17.973 #define SPDK_CONFIG_DPDK_LIB_DIR 00:14:17.973 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:14:17.973 #undef SPDK_CONFIG_DPDK_UADK 00:14:17.973 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:14:17.973 #define SPDK_CONFIG_EXAMPLES 1 00:14:17.973 #undef SPDK_CONFIG_FC 00:14:17.973 #define SPDK_CONFIG_FC_PATH 00:14:17.973 #define SPDK_CONFIG_FIO_PLUGIN 1 00:14:17.973 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:14:17.973 #undef SPDK_CONFIG_FUSE 00:14:17.973 #undef SPDK_CONFIG_FUZZER 00:14:17.973 #define SPDK_CONFIG_FUZZER_LIB 00:14:17.973 #define SPDK_CONFIG_GOLANG 1 00:14:17.973 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:14:17.973 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:14:17.973 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:14:17.973 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:14:17.973 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:14:17.973 #undef SPDK_CONFIG_HAVE_LIBBSD 00:14:17.973 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:14:17.973 #define SPDK_CONFIG_IDXD 1 00:14:17.973 #define SPDK_CONFIG_IDXD_KERNEL 1 00:14:17.973 #undef SPDK_CONFIG_IPSEC_MB 00:14:17.973 #define SPDK_CONFIG_IPSEC_MB_DIR 00:14:17.973 #define SPDK_CONFIG_ISAL 1 00:14:17.973 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:14:17.973 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:14:17.973 #define SPDK_CONFIG_LIBDIR 00:14:17.973 #undef SPDK_CONFIG_LTO 00:14:17.973 #define SPDK_CONFIG_MAX_LCORES 128 00:14:17.973 #define SPDK_CONFIG_NVME_CUSE 1 00:14:17.973 #undef SPDK_CONFIG_OCF 00:14:17.973 #define SPDK_CONFIG_OCF_PATH 00:14:17.973 #define SPDK_CONFIG_OPENSSL_PATH 00:14:17.973 #undef SPDK_CONFIG_PGO_CAPTURE 00:14:17.973 #define SPDK_CONFIG_PGO_DIR 00:14:17.973 #undef SPDK_CONFIG_PGO_USE 00:14:17.973 #define SPDK_CONFIG_PREFIX /usr/local 00:14:17.973 #undef SPDK_CONFIG_RAID5F 00:14:17.973 #undef SPDK_CONFIG_RBD 00:14:17.973 #define SPDK_CONFIG_RDMA 1 00:14:17.973 #define SPDK_CONFIG_RDMA_PROV verbs 00:14:17.973 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:14:17.973 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:14:17.973 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:14:17.973 #define SPDK_CONFIG_SHARED 1 00:14:17.973 #undef SPDK_CONFIG_SMA 00:14:17.973 #define SPDK_CONFIG_TESTS 1 00:14:17.973 #undef SPDK_CONFIG_TSAN 00:14:17.973 #define SPDK_CONFIG_UBLK 1 00:14:17.973 #define SPDK_CONFIG_UBSAN 1 00:14:17.973 #undef SPDK_CONFIG_UNIT_TESTS 00:14:17.973 #undef SPDK_CONFIG_URING 00:14:17.973 #define SPDK_CONFIG_URING_PATH 00:14:17.973 #undef SPDK_CONFIG_URING_ZNS 00:14:17.973 #define SPDK_CONFIG_USDT 1 00:14:17.973 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:14:17.973 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:14:17.973 #undef SPDK_CONFIG_VFIO_USER 00:14:17.973 #define SPDK_CONFIG_VFIO_USER_DIR 00:14:17.973 #define SPDK_CONFIG_VHOST 1 00:14:17.974 #define SPDK_CONFIG_VIRTIO 1 00:14:17.974 #undef SPDK_CONFIG_VTUNE 00:14:17.974 #define SPDK_CONFIG_VTUNE_DIR 00:14:17.974 #define SPDK_CONFIG_WERROR 1 00:14:17.974 #define SPDK_CONFIG_WPDK_DIR 00:14:17.974 #undef SPDK_CONFIG_XNVME 00:14:17.974 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:14:17.974 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:14:17.974 17:15:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:17.974 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:17.974 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:17.974 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:17.974 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.974 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.974 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.974 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:14:17.974 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.974 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:14:17.974 17:15:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:14:17.974 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:14:17.974 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:14:17.974 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:14:17.974 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:14:17.974 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:14:17.974 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:14:17.974 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:14:17.974 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:14:17.974 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:14:17.974 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:14:17.974 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:14:17.974 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:14:17.974 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:14:17.974 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:14:17.974 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:14:17.974 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:14:17.974 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:14:17.974 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:14:17.974 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:14:17.974 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:14:17.974 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:14:17.974 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! 
-d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:14:17.974 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:14:17.974 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:14:17.974 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:14:17.974 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:14:17.974 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:14:17.974 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:14:17.974 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:14:17.974 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:14:17.974 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:14:17.974 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:14:17.974 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:14:17.974 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:14:17.974 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:14:17.974 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:14:17.974 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:14:17.974 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:14:17.974 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:14:17.974 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:14:17.974 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:14:17.974 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:14:17.974 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:14:17.974 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:14:17.974 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:14:17.974 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:14:17.974 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:14:17.974 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:14:17.974 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 0 00:14:17.974 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:14:17.974 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:14:17.974 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export 
SPDK_TEST_NVME_CUSE 00:14:17.974 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:14:17.974 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:14:17.974 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:14:17.974 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:14:17.974 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:14:17.974 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:14:17.974 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:14:17.974 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:14:17.974 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:14:17.974 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:14:17.974 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:14:17.974 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:14:17.974 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:14:17.974 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:14:17.974 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:14:17.974 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:14:17.974 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:14:17.974 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:14:17.975 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:14:17.975 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:14:17.975 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:14:17.975 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:14:17.975 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:14:17.975 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:14:17.975 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:14:17.975 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:14:17.975 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:14:17.975 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:14:17.975 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:14:17.975 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 
00:14:17.975 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:14:17.975 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:14:17.975 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:14:17.975 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:14:17.975 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:14:17.975 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:14:17.975 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:14:17.975 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:14:17.975 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:14:17.975 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:14:17.975 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:14:17.975 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:14:17.975 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:14:17.975 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:14:17.975 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:14:17.975 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:14:17.975 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:14:17.975 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:14:17.975 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:14:17.975 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:14:17.975 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:14:17.975 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:14:17.975 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:14:17.975 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:14:17.975 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:14:17.975 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:14:17.975 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 1 00:14:17.975 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:14:17.975 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:14:17.975 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:14:17.975 17:15:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:14:17.975 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:14:17.975 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:14:17.975 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:14:17.975 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : 00:14:17.975 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:14:17.975 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:14:17.975 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:14:17.975 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:14:17.975 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:14:17.975 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:14:17.975 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:14:17.975 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:14:17.975 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:14:17.975 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:14:17.975 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:14:17.975 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:14:17.975 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:14:17.975 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:14:17.975 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:14:17.975 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 1 00:14:17.975 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:14:17.975 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 1 00:14:17.975 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:14:17.975 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:14:17.975 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:14:17.975 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:14:17.975 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:14:17.975 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # 
export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:14:17.975 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:14:17.975 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:14:17.975 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:14:17.975 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:14:17.975 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:14:17.975 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:14:17.975 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:14:17.975 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONDONTWRITEBYTECODE=1 00:14:17.975 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONDONTWRITEBYTECODE=1 00:14:17.975 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 
-- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:14:17.975 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:14:17.975 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:14:17.975 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:14:17.975 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:14:17.975 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@201 -- # rm -rf /var/tmp/asan_suppression_file 00:14:17.975 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # cat 00:14:17.975 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@238 -- # echo leak:libfuse3.so 00:14:17.976 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:14:17.976 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:14:17.976 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:14:17.976 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:14:17.976 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # '[' -z /var/spdk/dependencies ']' 00:14:17.976 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@247 -- # export DEPENDENCY_DIR 00:14:17.976 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:14:17.976 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:14:17.976 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@252 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:14:17.976 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@252 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:14:17.976 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:14:17.976 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:14:17.976 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:14:17.976 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:14:17.976 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@258 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:14:17.976 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:14:17.976 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:14:17.976 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:14:17.976 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@264 -- # '[' 0 -eq 0 ']' 00:14:17.976 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export valgrind= 00:14:17.976 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # valgrind= 00:14:17.976 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # uname -s 00:14:17.976 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # '[' Linux = Linux ']' 00:14:17.976 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # HUGEMEM=4096 00:14:17.976 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # export CLEAR_HUGE=yes 00:14:17.976 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # CLEAR_HUGE=yes 00:14:17.976 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:14:17.976 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:14:17.976 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@281 -- # MAKE=make 00:14:17.976 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@282 -- # MAKEFLAGS=-j10 00:14:17.976 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # export HUGEMEM=4096 00:14:17.976 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # HUGEMEM=4096 00:14:17.976 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@300 -- # NO_HUGE=() 00:14:17.976 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@301 -- # TEST_MODE= 00:14:17.976 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@302 -- # for i in "$@" 00:14:17.976 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@303 -- # case "$i" in 00:14:17.976 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # TEST_TRANSPORT=tcp 00:14:17.976 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@320 -- # [[ -z 72746 ]] 00:14:17.976 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@320 -- # kill -0 72746 00:14:17.976 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:14:17.976 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@330 -- # [[ -v testdir ]] 00:14:17.976 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@332 -- # local requested_size=2147483648 00:14:17.976 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@333 -- # local mount target_dir 00:14:17.976 17:15:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@335 -- # local -A mounts fss sizes avails uses 00:14:17.976 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@336 -- # local source fs size avail mount use 00:14:17.976 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # local storage_fallback storage_candidates 00:14:17.976 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # mktemp -udt spdk.XXXXXX 00:14:17.976 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # storage_fallback=/tmp/spdk.6xKh6n 00:14:17.976 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:14:17.976 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # [[ -n '' ]] 00:14:17.976 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@352 -- # [[ -n '' ]] 00:14:17.976 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@357 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.6xKh6n/tests/target /tmp/spdk.6xKh6n 00:14:17.976 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # requested_size=2214592512 00:14:17.976 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:14:17.976 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # df -T 00:14:17.976 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # grep -v Filesystem 00:14:17.976 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=devtmpfs 00:14:17.976 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=devtmpfs 00:14:17.976 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=4194304 00:14:17.976 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=4194304 00:14:17.976 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=0 00:14:17.976 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:14:17.976 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:14:17.976 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:14:17.976 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=6257971200 00:14:17.976 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=6267891712 00:14:17.976 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=9920512 00:14:17.976 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:14:17.976 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:14:17.976 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:14:17.976 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=2487009280 00:14:17.976 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=2507157504 00:14:17.976 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=20148224 00:14:17.976 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:14:17.976 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/vda5 00:14:17.977 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=btrfs 00:14:17.977 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=13785706496 00:14:17.977 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=20314062848 00:14:17.977 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=5244407808 00:14:17.977 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:14:17.977 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/vda5 00:14:17.977 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=btrfs 00:14:17.977 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=13785706496 00:14:17.977 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=20314062848 00:14:17.977 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=5244407808 00:14:17.977 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:14:17.977 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:14:17.977 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:14:17.977 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=6267760640 00:14:17.977 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=6267895808 00:14:17.977 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=135168 00:14:17.977 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:14:17.977 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/vda2 00:14:17.977 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=ext4 00:14:17.977 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=843546624 00:14:17.977 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=1012768768 00:14:17.977 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 
-- # uses["$mount"]=100016128 00:14:17.977 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:14:17.977 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/vda3 00:14:17.977 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=vfat 00:14:17.977 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=92499968 00:14:17.977 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=104607744 00:14:17.977 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=12107776 00:14:17.977 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:14:17.977 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:14:17.977 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:14:17.977 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=1253572608 00:14:17.977 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=1253576704 00:14:17.977 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=4096 00:14:17.977 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:14:17.977 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt/output 00:14:17.977 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=fuse.sshfs 00:14:17.977 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=94911455232 00:14:17.977 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=105088212992 00:14:17.977 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=4791324672 00:14:17.977 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:14:17.977 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # printf '* Looking for test storage...\n' 00:14:17.977 * Looking for test storage... 
00:14:17.977 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@370 -- # local target_space new_size 00:14:17.977 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # for target_dir in "${storage_candidates[@]}" 00:14:17.977 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:17.977 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # awk '$1 !~ /Filesystem/{print $6}' 00:14:17.977 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mount=/home 00:14:17.977 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # target_space=13785706496 00:14:17.977 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space == 0 || target_space < requested_size )) 00:14:17.977 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # (( target_space >= requested_size )) 00:14:17.977 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ btrfs == tmpfs ]] 00:14:17.977 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ btrfs == ramfs ]] 00:14:17.977 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ /home == / ]] 00:14:17.977 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:17.977 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:17.977 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@390 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:17.977 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:17.977 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # return 0 00:14:17.977 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:14:17.977 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:14:17.977 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:14:17.977 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:14:17.977 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:14:17.977 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:14:17.977 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:14:17.977 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:14:17.977 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:14:17.977 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:14:17.977 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 
-- # xtrace_restore 00:14:17.977 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:14:17.977 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:14:17.977 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:14:17.977 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:17.977 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:14:17.977 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:17.977 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:17.977 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:17.977 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:17.977 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:17.977 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:17.977 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:17.977 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:17.977 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:17.977 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:17.977 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:14:17.977 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:14:17.977 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:17.977 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:17.977 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:17.977 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:17.977 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:17.977 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:17.977 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:17.977 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:17.977 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.978 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.978 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.978 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:14:17.978 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.978 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:14:17.978 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:17.978 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:17.978 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:17.978 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" 
-e 0xFFFF) 00:14:17.978 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:17.978 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:17.978 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:17.978 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:17.978 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:14:17.978 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:17.978 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:14:17.978 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:17.978 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:17.978 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:17.978 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:17.978 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:17.978 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:17.978 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:17.978 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:17.978 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:17.978 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:17.978 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:17.978 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:17.978 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:17.978 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:17.978 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:17.978 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:17.978 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:17.978 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:17.978 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:17.978 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:17.978 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:17.978 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:17.978 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 
00:14:17.978 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:17.978 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:17.978 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:17.978 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:17.978 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:17.978 Cannot find device "nvmf_tgt_br" 00:14:17.978 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@155 -- # true 00:14:17.978 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:17.978 Cannot find device "nvmf_tgt_br2" 00:14:17.978 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@156 -- # true 00:14:17.978 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:17.978 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:17.978 Cannot find device "nvmf_tgt_br" 00:14:17.978 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@158 -- # true 00:14:17.978 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:17.978 Cannot find device "nvmf_tgt_br2" 00:14:17.978 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@159 -- # true 00:14:17.978 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:17.978 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:17.978 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:17.978 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:17.978 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@162 -- # true 00:14:17.978 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:17.978 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:17.978 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@163 -- # true 00:14:17.978 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:17.978 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:17.978 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:17.978 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:17.978 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:17.978 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:17.978 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@178 -- # 
ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:17.978 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:17.978 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:17.978 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:17.978 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:17.978 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:17.978 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:17.978 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:17.978 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:17.978 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:17.978 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:17.978 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:17.978 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:17.978 17:15:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:17.978 17:15:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:17.978 17:15:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:17.978 17:15:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:17.978 17:15:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:17.978 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:17.978 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:14:17.978 00:14:17.978 --- 10.0.0.2 ping statistics --- 00:14:17.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:17.978 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:14:17.978 17:15:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:17.978 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:17.978 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:14:17.978 00:14:17.978 --- 10.0.0.3 ping statistics --- 00:14:17.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:17.978 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:14:17.978 17:15:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:17.979 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:17.979 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:14:17.979 00:14:17.979 --- 10.0.0.1 ping statistics --- 00:14:17.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:17.979 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:14:17.979 17:15:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:17.979 17:15:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@433 -- # return 0 00:14:17.979 17:15:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:17.979 17:15:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:17.979 17:15:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:17.979 17:15:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:17.979 17:15:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:17.979 17:15:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:17.979 17:15:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:17.979 17:15:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:14:17.979 17:15:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:17.979 17:15:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:17.979 17:15:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:14:17.979 ************************************ 00:14:17.979 START TEST nvmf_filesystem_no_in_capsule 00:14:17.979 ************************************ 00:14:17.979 17:15:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:14:17.979 17:15:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:14:17.979 17:15:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:14:17.979 17:15:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:17.979 17:15:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:17.979 17:15:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:17.979 17:15:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=72897 00:14:17.979 17:15:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 72897 00:14:17.979 17:15:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 72897 ']' 00:14:17.979 17:15:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:17.979 17:15:03 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:17.979 17:15:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:17.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:17.979 17:15:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:17.979 17:15:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:17.979 17:15:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:17.979 [2024-07-24 17:15:03.146802] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:14:17.979 [2024-07-24 17:15:03.146931] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:17.979 [2024-07-24 17:15:03.288026] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:17.979 [2024-07-24 17:15:03.406425] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:17.979 [2024-07-24 17:15:03.406683] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:17.979 [2024-07-24 17:15:03.406760] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:17.979 [2024-07-24 17:15:03.406842] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:17.979 [2024-07-24 17:15:03.406899] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
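Editor's note: here the target application is launched inside the test namespace with shared-memory id 0 (-i 0), all tracepoint groups enabled (-e 0xFFFF) and a four-core mask (-m 0xF), and waitforlisten blocks until the JSON-RPC socket /var/tmp/spdk.sock answers. A minimal stand-alone sketch of that launch-and-wait step; the rpc.py path is inferred from the repo layout shown in the log, and the polling loop is an assumption (the harness uses its own waitforlisten helper):

# Launch nvmf_tgt inside the target namespace and wait for its JSON-RPC socket.
sudo ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# Poll until the app answers on /var/tmp/spdk.sock (waitforlisten retries the same way).
until sudo /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
    sleep 0.5
done
echo "nvmf_tgt (pid $nvmfpid) ready on /var/tmp/spdk.sock"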
00:14:17.979 [2024-07-24 17:15:03.407073] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:17.979 [2024-07-24 17:15:03.407247] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:17.979 [2024-07-24 17:15:03.407918] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:17.979 [2024-07-24 17:15:03.407957] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:17.979 17:15:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:17.979 17:15:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:14:17.979 17:15:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:17.979 17:15:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:17.979 17:15:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:17.979 17:15:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:17.979 17:15:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:14:17.979 17:15:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:14:17.979 17:15:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.979 17:15:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:17.979 [2024-07-24 17:15:04.152975] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:17.979 17:15:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.979 17:15:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:14:17.979 17:15:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.979 17:15:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:18.238 Malloc1 00:14:18.238 17:15:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.238 17:15:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:18.238 17:15:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.238 17:15:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:18.238 17:15:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.238 17:15:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:18.238 17:15:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.238 17:15:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:18.238 17:15:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.238 17:15:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:18.238 17:15:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.238 17:15:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:18.238 [2024-07-24 17:15:04.344848] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:18.238 17:15:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.238 17:15:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:14:18.238 17:15:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:14:18.238 17:15:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:14:18.238 17:15:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:14:18.238 17:15:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:14:18.238 17:15:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:14:18.238 17:15:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.238 17:15:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:18.238 17:15:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.238 17:15:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:14:18.238 { 00:14:18.238 "aliases": [ 00:14:18.238 "09ddd3ae-3415-41ba-92df-5796f5576cd2" 00:14:18.238 ], 00:14:18.238 "assigned_rate_limits": { 00:14:18.238 "r_mbytes_per_sec": 0, 00:14:18.238 "rw_ios_per_sec": 0, 00:14:18.238 "rw_mbytes_per_sec": 0, 00:14:18.238 "w_mbytes_per_sec": 0 00:14:18.238 }, 00:14:18.238 "block_size": 512, 00:14:18.238 "claim_type": "exclusive_write", 00:14:18.238 "claimed": true, 00:14:18.238 "driver_specific": {}, 00:14:18.238 "memory_domains": [ 00:14:18.238 { 00:14:18.238 "dma_device_id": "system", 00:14:18.238 "dma_device_type": 1 00:14:18.238 }, 00:14:18.238 { 00:14:18.239 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:18.239 
"dma_device_type": 2 00:14:18.239 } 00:14:18.239 ], 00:14:18.239 "name": "Malloc1", 00:14:18.239 "num_blocks": 1048576, 00:14:18.239 "product_name": "Malloc disk", 00:14:18.239 "supported_io_types": { 00:14:18.239 "abort": true, 00:14:18.239 "compare": false, 00:14:18.239 "compare_and_write": false, 00:14:18.239 "copy": true, 00:14:18.239 "flush": true, 00:14:18.239 "get_zone_info": false, 00:14:18.239 "nvme_admin": false, 00:14:18.239 "nvme_io": false, 00:14:18.239 "nvme_io_md": false, 00:14:18.239 "nvme_iov_md": false, 00:14:18.239 "read": true, 00:14:18.239 "reset": true, 00:14:18.239 "seek_data": false, 00:14:18.239 "seek_hole": false, 00:14:18.239 "unmap": true, 00:14:18.239 "write": true, 00:14:18.239 "write_zeroes": true, 00:14:18.239 "zcopy": true, 00:14:18.239 "zone_append": false, 00:14:18.239 "zone_management": false 00:14:18.239 }, 00:14:18.239 "uuid": "09ddd3ae-3415-41ba-92df-5796f5576cd2", 00:14:18.239 "zoned": false 00:14:18.239 } 00:14:18.239 ]' 00:14:18.239 17:15:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:14:18.239 17:15:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:14:18.239 17:15:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:14:18.498 17:15:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:14:18.498 17:15:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:14:18.498 17:15:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:14:18.498 17:15:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:14:18.498 17:15:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid=6e739a91-f4c2-49d2-9c71-a65eb7d5315b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:18.498 17:15:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:14:18.498 17:15:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:14:18.498 17:15:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:18.498 17:15:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:18.498 17:15:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:14:21.032 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:21.032 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:21.032 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c 
SPDKISFASTANDAWESOME 00:14:21.032 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:21.032 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:21.032 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:14:21.032 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:14:21.032 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:14:21.032 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:14:21.032 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:14:21.032 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:14:21.032 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:14:21.033 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:14:21.033 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:14:21.033 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:14:21.033 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:14:21.033 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:14:21.033 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:14:21.033 17:15:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:14:21.968 17:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:14:21.968 17:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:14:21.968 17:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:14:21.968 17:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:21.968 17:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:21.968 ************************************ 00:14:21.968 START TEST filesystem_ext4 00:14:21.968 ************************************ 00:14:21.968 17:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 
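Editor's note: stripped of the xtrace prefixes, the provisioning performed above is a short RPC/CLI sequence — create the TCP transport (in-capsule data disabled for this pass), back it with a 512 MiB Malloc bdev of 512-byte blocks, export it through subsystem nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420, connect from the host, then partition the resulting block device. A sketch under the assumption that scripts/rpc.py is used directly in place of the harness's rpc_cmd wrapper; all arguments, NQNs and the host UUID are the ones printed in the log:

# rpc_cmd in the harness wraps this script; the socket path is the default shown above.
rpc() { sudo /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }

# Target side: TCP transport with in-capsule data disabled (-c 0), a 512 MiB bdev with
# 512-byte blocks, and a subsystem exporting it on 10.0.0.2:4420.
rpc nvmf_create_transport -t tcp -o -u 8192 -c 0
rpc bdev_malloc_create 512 512 -b Malloc1
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Host side: attach over TCP, locate the block device by its serial, and partition it.
sudo nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b \
    --hostid=6e739a91-f4c2-49d2-9c71-a65eb7d5315b
nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')
sudo parted -s "/dev/$nvme_name" mklabel gpt mkpart SPDK_TEST 0% 100%
sudo partprobe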
00:14:21.968 17:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:14:21.968 17:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:21.968 17:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:14:21.968 17:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:14:21.968 17:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:14:21.968 17:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:14:21.968 17:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:14:21.968 17:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:14:21.968 17:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:14:21.968 17:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:14:21.968 mke2fs 1.46.5 (30-Dec-2021) 00:14:21.968 Discarding device blocks: 0/522240 done 00:14:21.968 Creating filesystem with 522240 1k blocks and 130560 inodes 00:14:21.968 Filesystem UUID: 3e902c00-3114-4dff-9079-791c8a9c3c70 00:14:21.968 Superblock backups stored on blocks: 00:14:21.968 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:14:21.968 00:14:21.968 Allocating group tables: 0/64 done 00:14:21.968 Writing inode tables: 0/64 done 00:14:21.968 Creating journal (8192 blocks): done 00:14:21.968 Writing superblocks and filesystem accounting information: 0/64 done 00:14:21.968 00:14:21.968 17:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:14:21.968 17:15:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:21.968 17:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:21.968 17:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:14:21.968 17:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:21.968 17:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:14:21.968 17:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:14:21.968 17:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:21.968 
17:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 72897 00:14:21.968 17:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:21.968 17:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:21.968 17:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:21.968 17:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:21.968 00:14:21.968 real 0m0.314s 00:14:21.968 user 0m0.018s 00:14:21.968 sys 0m0.060s 00:14:21.969 17:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:21.969 17:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:14:21.969 ************************************ 00:14:21.969 END TEST filesystem_ext4 00:14:21.969 ************************************ 00:14:22.228 17:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:14:22.228 17:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:14:22.228 17:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:22.228 17:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:22.228 ************************************ 00:14:22.228 START TEST filesystem_btrfs 00:14:22.228 ************************************ 00:14:22.228 17:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:14:22.228 17:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:14:22.228 17:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:22.228 17:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:14:22.228 17:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:14:22.228 17:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:14:22.228 17:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:14:22.228 17:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:14:22.228 17:15:08 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:14:22.228 17:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:14:22.228 17:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:14:22.228 btrfs-progs v6.6.2 00:14:22.228 See https://btrfs.readthedocs.io for more information. 00:14:22.228 00:14:22.228 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:14:22.228 NOTE: several default settings have changed in version 5.15, please make sure 00:14:22.228 this does not affect your deployments: 00:14:22.228 - DUP for metadata (-m dup) 00:14:22.228 - enabled no-holes (-O no-holes) 00:14:22.228 - enabled free-space-tree (-R free-space-tree) 00:14:22.228 00:14:22.228 Label: (null) 00:14:22.228 UUID: 4a7f7a4b-cf08-4d4a-8d2c-01a6f5db7280 00:14:22.228 Node size: 16384 00:14:22.228 Sector size: 4096 00:14:22.228 Filesystem size: 510.00MiB 00:14:22.228 Block group profiles: 00:14:22.228 Data: single 8.00MiB 00:14:22.228 Metadata: DUP 32.00MiB 00:14:22.228 System: DUP 8.00MiB 00:14:22.228 SSD detected: yes 00:14:22.228 Zoned device: no 00:14:22.228 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:14:22.228 Runtime features: free-space-tree 00:14:22.228 Checksum: crc32c 00:14:22.228 Number of devices: 1 00:14:22.228 Devices: 00:14:22.228 ID SIZE PATH 00:14:22.228 1 510.00MiB /dev/nvme0n1p1 00:14:22.228 00:14:22.228 17:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:14:22.228 17:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:22.228 17:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:22.228 17:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:14:22.228 17:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:22.228 17:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:14:22.228 17:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:14:22.228 17:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:22.228 17:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 72897 00:14:22.228 17:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:22.228 17:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:22.228 17:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # 
grep -q -w nvme0n1p1 00:14:22.228 17:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:22.228 00:14:22.228 real 0m0.221s 00:14:22.228 user 0m0.023s 00:14:22.228 sys 0m0.059s 00:14:22.228 17:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:22.228 17:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:14:22.228 ************************************ 00:14:22.228 END TEST filesystem_btrfs 00:14:22.228 ************************************ 00:14:22.489 17:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:14:22.489 17:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:14:22.489 17:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:22.489 17:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:22.489 ************************************ 00:14:22.489 START TEST filesystem_xfs 00:14:22.489 ************************************ 00:14:22.489 17:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:14:22.489 17:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:14:22.489 17:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:22.489 17:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:14:22.489 17:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:14:22.489 17:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:14:22.489 17:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:14:22.489 17:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:14:22.489 17:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:14:22.489 17:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:14:22.489 17:15:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:14:22.489 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:14:22.489 = sectsz=512 attr=2, projid32bit=1 00:14:22.489 = crc=1 finobt=1, sparse=1, rmapbt=0 00:14:22.489 = reflink=1 bigtime=1 
inobtcount=1 nrext64=0 00:14:22.489 data = bsize=4096 blocks=130560, imaxpct=25 00:14:22.489 = sunit=0 swidth=0 blks 00:14:22.489 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:14:22.489 log =internal log bsize=4096 blocks=16384, version=2 00:14:22.489 = sectsz=512 sunit=0 blks, lazy-count=1 00:14:22.489 realtime =none extsz=4096 blocks=0, rtextents=0 00:14:23.064 Discarding blocks...Done. 00:14:23.064 17:15:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:14:23.064 17:15:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:25.595 17:15:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:25.595 17:15:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:14:25.595 17:15:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:25.596 17:15:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:14:25.596 17:15:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:14:25.596 17:15:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:25.596 17:15:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 72897 00:14:25.596 17:15:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:25.596 17:15:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:25.596 17:15:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:25.596 17:15:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:25.596 00:14:25.596 real 0m3.102s 00:14:25.596 user 0m0.024s 00:14:25.596 sys 0m0.053s 00:14:25.596 17:15:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:25.596 17:15:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:14:25.596 ************************************ 00:14:25.596 END TEST filesystem_xfs 00:14:25.596 ************************************ 00:14:25.596 17:15:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:14:25.596 17:15:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:14:25.596 17:15:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:25.596 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 
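Editor's note: each filesystem_* subtest above runs the same round-trip — format the SPDK_TEST partition, mount it, create and delete a file with syncs in between, unmount, then confirm both that the nvmf_tgt process (pid 72897 here) is still alive and that the NVMe partition is still visible — before the suite removes the partition and disconnects. A sketch of that loop as a reusable function; check_filesystem is a hypothetical name, and the real script's retry-on-busy-umount handling is omitted:

# Format, exercise and verify one filesystem on the exported namespace.
# Usage: check_filesystem ext4 /dev/nvme0n1p1 "$nvmfpid"
check_filesystem() {
    local fstype=$1 dev=$2 tgt_pid=$3 force=-f
    [[ $fstype == ext4 ]] && force=-F                 # mkfs.ext4 takes -F, the others -f
    sudo "mkfs.$fstype" "$force" "$dev"
    sudo mount "$dev" /mnt/device
    sudo touch /mnt/device/aaa && sync                # write something through the FS
    sudo rm /mnt/device/aaa && sync                   # and remove it again
    sudo umount /mnt/device
    kill -0 "$tgt_pid"                                # target process must still be alive
    lsblk -l -o NAME | grep -q -w "${dev##*/}"        # partition must still be visible
}

for fs in ext4 btrfs xfs; do
    check_filesystem "$fs" /dev/nvme0n1p1 "$nvmfpid"
done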
00:14:25.596 17:15:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:25.596 17:15:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:14:25.596 17:15:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:25.596 17:15:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:25.596 17:15:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:25.596 17:15:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:25.596 17:15:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:14:25.596 17:15:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:25.596 17:15:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.596 17:15:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:25.596 17:15:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.596 17:15:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:25.596 17:15:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 72897 00:14:25.596 17:15:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 72897 ']' 00:14:25.596 17:15:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 72897 00:14:25.596 17:15:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:14:25.596 17:15:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:25.596 17:15:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72897 00:14:25.596 17:15:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:25.596 17:15:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:25.596 killing process with pid 72897 00:14:25.596 17:15:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72897' 00:14:25.596 17:15:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 72897 00:14:25.596 17:15:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@974 -- # wait 72897 00:14:26.224 ************************************ 00:14:26.224 END TEST nvmf_filesystem_no_in_capsule 00:14:26.225 ************************************ 00:14:26.225 17:15:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:14:26.225 00:14:26.225 real 0m9.110s 00:14:26.225 user 0m34.221s 00:14:26.225 sys 0m1.643s 00:14:26.225 17:15:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:26.225 17:15:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:26.225 17:15:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:14:26.225 17:15:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:26.225 17:15:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:26.225 17:15:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:14:26.225 ************************************ 00:14:26.225 START TEST nvmf_filesystem_in_capsule 00:14:26.225 ************************************ 00:14:26.225 17:15:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:14:26.225 17:15:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:14:26.225 17:15:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:14:26.225 17:15:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:26.225 17:15:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:26.225 17:15:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:26.225 17:15:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=73208 00:14:26.225 17:15:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 73208 00:14:26.225 17:15:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 73208 ']' 00:14:26.225 17:15:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:26.225 17:15:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:26.225 17:15:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:26.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:26.225 17:15:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
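Editor's note: the second half of the suite repeats the whole sequence with in-capsule data enabled — nvmf_filesystem_part is invoked with 4096 instead of 0 and a fresh nvmf_tgt (pid 73208) is started in the same namespace. As the later entries show, the only target-side difference is the transport's in-capsule data size; assuming the rpc helper from the sketch above, the one call that changes is:

# Identical setup, but commands may now carry up to 4096 bytes of in-capsule data.
rpc nvmf_create_transport -t tcp -o -u 8192 -c 4096

Everything else — Malloc1, cnode1, the 10.0.0.2:4420 listener, the host connect and the three filesystem checks — is unchanged.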
00:14:26.225 17:15:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:26.225 17:15:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:26.225 [2024-07-24 17:15:12.298361] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:14:26.225 [2024-07-24 17:15:12.298452] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:26.225 [2024-07-24 17:15:12.435860] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:26.483 [2024-07-24 17:15:12.561187] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:26.483 [2024-07-24 17:15:12.561237] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:26.483 [2024-07-24 17:15:12.561249] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:26.483 [2024-07-24 17:15:12.561257] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:26.483 [2024-07-24 17:15:12.561265] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:26.483 [2024-07-24 17:15:12.561359] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:26.483 [2024-07-24 17:15:12.561491] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:26.483 [2024-07-24 17:15:12.561913] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:26.483 [2024-07-24 17:15:12.561952] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:27.048 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:27.048 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:14:27.307 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:27.307 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:27.307 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:27.307 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:27.307 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:14:27.307 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:14:27.307 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.307 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:27.307 [2024-07-24 17:15:13.323315] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:27.307 17:15:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.307 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:14:27.307 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.307 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:27.307 Malloc1 00:14:27.307 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.307 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:27.307 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.307 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:27.307 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.307 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:27.307 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.307 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:27.307 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.307 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:27.307 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.307 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:27.307 [2024-07-24 17:15:13.515732] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:27.307 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.307 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:14:27.307 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:14:27.307 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:14:27.307 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:14:27.307 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:14:27.307 17:15:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:14:27.308 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.308 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:27.566 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.566 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:14:27.566 { 00:14:27.566 "aliases": [ 00:14:27.566 "3200bd86-429d-44b8-a377-d0b64775b127" 00:14:27.566 ], 00:14:27.566 "assigned_rate_limits": { 00:14:27.566 "r_mbytes_per_sec": 0, 00:14:27.566 "rw_ios_per_sec": 0, 00:14:27.566 "rw_mbytes_per_sec": 0, 00:14:27.566 "w_mbytes_per_sec": 0 00:14:27.566 }, 00:14:27.566 "block_size": 512, 00:14:27.566 "claim_type": "exclusive_write", 00:14:27.566 "claimed": true, 00:14:27.566 "driver_specific": {}, 00:14:27.566 "memory_domains": [ 00:14:27.566 { 00:14:27.566 "dma_device_id": "system", 00:14:27.566 "dma_device_type": 1 00:14:27.566 }, 00:14:27.566 { 00:14:27.566 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:27.566 "dma_device_type": 2 00:14:27.566 } 00:14:27.566 ], 00:14:27.566 "name": "Malloc1", 00:14:27.566 "num_blocks": 1048576, 00:14:27.566 "product_name": "Malloc disk", 00:14:27.566 "supported_io_types": { 00:14:27.566 "abort": true, 00:14:27.566 "compare": false, 00:14:27.566 "compare_and_write": false, 00:14:27.566 "copy": true, 00:14:27.566 "flush": true, 00:14:27.566 "get_zone_info": false, 00:14:27.566 "nvme_admin": false, 00:14:27.566 "nvme_io": false, 00:14:27.566 "nvme_io_md": false, 00:14:27.566 "nvme_iov_md": false, 00:14:27.566 "read": true, 00:14:27.566 "reset": true, 00:14:27.566 "seek_data": false, 00:14:27.566 "seek_hole": false, 00:14:27.566 "unmap": true, 00:14:27.566 "write": true, 00:14:27.566 "write_zeroes": true, 00:14:27.566 "zcopy": true, 00:14:27.566 "zone_append": false, 00:14:27.566 "zone_management": false 00:14:27.566 }, 00:14:27.566 "uuid": "3200bd86-429d-44b8-a377-d0b64775b127", 00:14:27.566 "zoned": false 00:14:27.566 } 00:14:27.566 ]' 00:14:27.566 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:14:27.566 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:14:27.566 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:14:27.566 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:14:27.566 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:14:27.566 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:14:27.566 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:14:27.566 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid=6e739a91-f4c2-49d2-9c71-a65eb7d5315b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:27.825 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:14:27.825 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:14:27.825 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:27.825 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:27.825 17:15:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:14:29.727 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:29.727 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:29.727 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:29.727 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:29.727 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:29.727 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:14:29.727 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:14:29.727 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:14:29.727 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:14:29.727 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:14:29.727 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:14:29.727 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:14:29.727 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:14:29.727 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:14:29.727 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:14:29.727 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:14:29.727 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:14:29.727 17:15:15 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:14:29.985 17:15:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:14:30.933 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:14:30.933 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:14:30.933 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:14:30.933 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:30.933 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:30.933 ************************************ 00:14:30.933 START TEST filesystem_in_capsule_ext4 00:14:30.933 ************************************ 00:14:30.933 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:14:30.933 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:14:30.933 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:30.933 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:14:30.933 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:14:30.933 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:14:30.933 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:14:30.933 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:14:30.933 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:14:30.933 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:14:30.933 17:15:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:14:30.933 mke2fs 1.46.5 (30-Dec-2021) 00:14:30.933 Discarding device blocks: 0/522240 done 00:14:30.933 Creating filesystem with 522240 1k blocks and 130560 inodes 00:14:30.933 Filesystem UUID: 12b4e741-681a-4f14-9559-747953f31897 00:14:30.933 Superblock backups stored on blocks: 00:14:30.933 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:14:30.933 00:14:30.933 Allocating group tables: 0/64 done 00:14:30.933 Writing inode tables: 
0/64 done 00:14:30.933 Creating journal (8192 blocks): done 00:14:30.933 Writing superblocks and filesystem accounting information: 0/64 done 00:14:30.933 00:14:30.933 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:14:30.933 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:31.192 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:31.192 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:14:31.192 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:31.192 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:14:31.192 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:14:31.192 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:31.192 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 73208 00:14:31.192 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:31.192 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:31.192 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:31.192 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:31.192 ************************************ 00:14:31.192 END TEST filesystem_in_capsule_ext4 00:14:31.192 ************************************ 00:14:31.192 00:14:31.192 real 0m0.394s 00:14:31.192 user 0m0.019s 00:14:31.192 sys 0m0.061s 00:14:31.192 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:31.192 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:14:31.192 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:14:31.192 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:14:31.192 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:31.192 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:31.192 
************************************ 00:14:31.192 START TEST filesystem_in_capsule_btrfs 00:14:31.192 ************************************ 00:14:31.192 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:14:31.192 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:14:31.451 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:31.451 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:14:31.451 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:14:31.451 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:14:31.451 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:14:31.451 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:14:31.451 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:14:31.451 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:14:31.451 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:14:31.451 btrfs-progs v6.6.2 00:14:31.451 See https://btrfs.readthedocs.io for more information. 00:14:31.451 00:14:31.451 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:14:31.451 NOTE: several default settings have changed in version 5.15, please make sure 00:14:31.451 this does not affect your deployments: 00:14:31.451 - DUP for metadata (-m dup) 00:14:31.451 - enabled no-holes (-O no-holes) 00:14:31.451 - enabled free-space-tree (-R free-space-tree) 00:14:31.451 00:14:31.451 Label: (null) 00:14:31.451 UUID: af204a7a-ca69-414d-8ae8-af353cfe23e9 00:14:31.451 Node size: 16384 00:14:31.451 Sector size: 4096 00:14:31.451 Filesystem size: 510.00MiB 00:14:31.451 Block group profiles: 00:14:31.451 Data: single 8.00MiB 00:14:31.451 Metadata: DUP 32.00MiB 00:14:31.451 System: DUP 8.00MiB 00:14:31.451 SSD detected: yes 00:14:31.451 Zoned device: no 00:14:31.451 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:14:31.451 Runtime features: free-space-tree 00:14:31.451 Checksum: crc32c 00:14:31.451 Number of devices: 1 00:14:31.451 Devices: 00:14:31.451 ID SIZE PATH 00:14:31.451 1 510.00MiB /dev/nvme0n1p1 00:14:31.451 00:14:31.451 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:14:31.451 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:31.451 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:31.451 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:14:31.451 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:31.451 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:14:31.451 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:14:31.451 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:31.451 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 73208 00:14:31.451 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:31.451 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:31.451 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:31.451 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:31.451 00:14:31.451 real 0m0.220s 00:14:31.451 user 0m0.025s 00:14:31.452 sys 0m0.054s 00:14:31.452 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:31.452 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- common/autotest_common.sh@10 -- # set +x 00:14:31.452 ************************************ 00:14:31.452 END TEST filesystem_in_capsule_btrfs 00:14:31.452 ************************************ 00:14:31.452 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:14:31.452 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:14:31.452 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:31.452 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:31.452 ************************************ 00:14:31.452 START TEST filesystem_in_capsule_xfs 00:14:31.452 ************************************ 00:14:31.452 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:14:31.452 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:14:31.452 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:14:31.452 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:14:31.452 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:14:31.452 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:14:31.452 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:14:31.452 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:14:31.452 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:14:31.452 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:14:31.452 17:15:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:14:31.710 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:14:31.710 = sectsz=512 attr=2, projid32bit=1 00:14:31.710 = crc=1 finobt=1, sparse=1, rmapbt=0 00:14:31.710 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:14:31.710 data = bsize=4096 blocks=130560, imaxpct=25 00:14:31.710 = sunit=0 swidth=0 blks 00:14:31.710 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:14:31.710 log =internal log bsize=4096 blocks=16384, version=2 00:14:31.710 = sectsz=512 sunit=0 blks, lazy-count=1 00:14:31.710 realtime =none extsz=4096 blocks=0, rtextents=0 00:14:32.276 Discarding blocks...Done. 
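[Editorial recap, not part of the captured trace.] The xtrace above covers the whole in-capsule filesystem flow: the SPDK target is configured over RPC (TCP transport with a 4096-byte in-capsule data size, a 512 MiB Malloc1 bdev exposed through nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420), the initiator attaches with nvme connect, and each filesystem is exercised with the same make/mount/touch/sync/rm/umount pattern from target/filesystem.sh. A minimal sketch condensed from the commands visible in the trace (rpc_cmd is the test harness's wrapper around SPDK's scripts/rpc.py; host NQN/ID flags and error handling are omitted here):

    # target side: build the NVMe-oF TCP target over SPDK RPC
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096        # 4096-byte in-capsule data
    rpc_cmd bdev_malloc_create 512 512 -b Malloc1                  # 512 MiB bdev, 512-byte blocks
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # initiator side: attach, partition, then run the per-filesystem check
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
    partprobe
    mkfs.ext4 -F /dev/nvme0n1p1        # repeated with mkfs.btrfs -f and mkfs.xfs -f
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa && sync
    rm /mnt/device/aaa && sync
    umount /mnt/device
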
00:14:32.276 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:14:32.277 17:15:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:34.188 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:34.188 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:14:34.188 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:34.188 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:14:34.188 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:14:34.189 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:34.189 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 73208 00:14:34.189 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:34.189 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:34.189 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:34.189 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:34.189 ************************************ 00:14:34.189 END TEST filesystem_in_capsule_xfs 00:14:34.189 ************************************ 00:14:34.189 00:14:34.189 real 0m2.620s 00:14:34.189 user 0m0.027s 00:14:34.189 sys 0m0.048s 00:14:34.189 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:34.189 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:14:34.189 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:14:34.189 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:14:34.189 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:34.447 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:34.447 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:34.447 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1219 -- # local i=0 00:14:34.447 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:34.447 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:34.447 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:34.447 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:34.447 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:14:34.447 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:34.447 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.447 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:34.447 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.447 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:34.447 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 73208 00:14:34.447 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 73208 ']' 00:14:34.447 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 73208 00:14:34.447 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:14:34.447 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:34.447 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73208 00:14:34.447 killing process with pid 73208 00:14:34.447 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:34.447 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:34.447 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73208' 00:14:34.447 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 73208 00:14:34.447 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 73208 00:14:34.706 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:14:34.706 00:14:34.706 real 0m8.673s 00:14:34.706 user 0m32.618s 00:14:34.706 sys 0m1.601s 00:14:34.706 17:15:20 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:34.706 ************************************ 00:14:34.706 END TEST nvmf_filesystem_in_capsule 00:14:34.706 ************************************ 00:14:34.706 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:34.965 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:14:34.965 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:34.965 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:14:34.965 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:34.965 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:14:34.965 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:34.965 17:15:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:34.965 rmmod nvme_tcp 00:14:34.965 rmmod nvme_fabrics 00:14:34.965 rmmod nvme_keyring 00:14:34.965 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:34.965 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:14:34.965 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:14:34.965 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:14:34.965 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:34.965 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:34.965 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:34.965 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:34.965 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:34.965 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:34.965 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:34.965 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:34.965 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:34.965 00:14:34.965 real 0m18.620s 00:14:34.965 user 1m7.078s 00:14:34.965 sys 0m3.650s 00:14:34.965 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:34.965 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:14:34.965 ************************************ 00:14:34.965 END TEST nvmf_filesystem 00:14:34.965 ************************************ 00:14:34.965 17:15:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:14:34.965 17:15:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:34.965 17:15:21 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:34.965 17:15:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:34.965 ************************************ 00:14:34.965 START TEST nvmf_target_discovery 00:14:34.965 ************************************ 00:14:34.965 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:14:35.224 * Looking for test storage... 00:14:35.224 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:35.224 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:35.224 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:14:35.224 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:35.224 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:35.224 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:35.224 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:35.224 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:35.224 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:35.224 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:35.224 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:35.224 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:35.224 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:35.224 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:14:35.224 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:14:35.224 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:35.224 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:35.224 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:35.224 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:35.224 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:35.224 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:35.224 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:35.224 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:35.224 17:15:21 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.224 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.224 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.224 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:14:35.224 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.224 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:14:35.224 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:35.224 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:35.224 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:35.224 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:35.224 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:14:35.224 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:35.224 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:35.224 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:35.224 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:14:35.224 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:14:35.224 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:14:35.224 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:14:35.224 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:14:35.224 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:35.224 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:35.224 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:35.224 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:35.224 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:35.224 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:35.224 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:35.224 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:35.224 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:35.224 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:35.224 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:35.224 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:35.224 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:35.224 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:35.224 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:35.224 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:35.224 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:35.224 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:35.224 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:35.224 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:35.224 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:35.224 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:35.224 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:35.225 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:35.225 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:35.225 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:35.225 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:35.225 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:35.225 Cannot find device "nvmf_tgt_br" 00:14:35.225 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@155 -- # true 00:14:35.225 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:35.225 Cannot find device "nvmf_tgt_br2" 00:14:35.225 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@156 -- # true 00:14:35.225 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:35.225 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:35.225 Cannot find device "nvmf_tgt_br" 00:14:35.225 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@158 -- # true 00:14:35.225 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:35.225 Cannot find device "nvmf_tgt_br2" 00:14:35.225 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@159 -- # true 00:14:35.225 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:35.225 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:35.225 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:35.225 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:35.225 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@162 -- # true 00:14:35.225 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:35.225 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:35.225 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@163 -- # true 00:14:35.225 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:35.225 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:35.225 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:35.225 17:15:21 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:35.225 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:35.225 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:35.225 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:35.225 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:35.483 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:35.483 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:35.483 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:35.483 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:35.483 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:35.483 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:35.483 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:35.483 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:35.483 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:35.483 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:35.483 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:35.483 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:35.483 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:35.483 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:35.483 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:35.483 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:35.483 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:35.484 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:14:35.484 00:14:35.484 --- 10.0.0.2 ping statistics --- 00:14:35.484 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:35.484 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:14:35.484 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:35.484 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:14:35.484 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:14:35.484 00:14:35.484 --- 10.0.0.3 ping statistics --- 00:14:35.484 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:35.484 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:14:35.484 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:35.484 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:35.484 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:14:35.484 00:14:35.484 --- 10.0.0.1 ping statistics --- 00:14:35.484 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:35.484 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:14:35.484 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:35.484 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@433 -- # return 0 00:14:35.484 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:35.484 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:35.484 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:35.484 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:35.484 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:35.484 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:35.484 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:35.484 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:14:35.484 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:35.484 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:35.484 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:35.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:35.484 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=73658 00:14:35.484 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:35.484 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 73658 00:14:35.484 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 73658 ']' 00:14:35.484 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:35.484 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:35.484 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
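[Editorial recap, not part of the captured trace.] Before the discovery test starts, nvmf_veth_init builds the virtual topology used by all the virt-network tests: a network namespace for the target, veth pairs joined by a bridge, 10.0.0.1/24 on the initiator side, 10.0.0.2/24 and 10.0.0.3/24 inside the namespace, and an iptables rule opening TCP port 4420; the three pings above verify connectivity in both directions. A sketch condensed from the ip/iptables commands in the trace (the authoritative logic lives in nvmf/common.sh and is not reproduced in full):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up  && ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2                                  # host -> target namespace
    ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # target namespace -> host
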
00:14:35.484 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:35.484 17:15:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:35.484 [2024-07-24 17:15:21.659491] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:14:35.484 [2024-07-24 17:15:21.659603] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:35.742 [2024-07-24 17:15:21.800786] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:35.742 [2024-07-24 17:15:21.909393] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:35.742 [2024-07-24 17:15:21.909733] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:35.742 [2024-07-24 17:15:21.909887] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:35.742 [2024-07-24 17:15:21.910013] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:35.742 [2024-07-24 17:15:21.910047] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:35.742 [2024-07-24 17:15:21.910263] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:35.742 [2024-07-24 17:15:21.910618] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:35.742 [2024-07-24 17:15:21.910899] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:35.742 [2024-07-24 17:15:21.910903] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:36.680 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:36.680 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:14:36.680 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:36.680 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:36.680 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:36.680 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:36.680 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:36.680 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.680 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:36.680 [2024-07-24 17:15:22.709916] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:36.680 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.680 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:14:36.680 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:14:36.680 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:14:36.680 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.680 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:36.680 Null1 00:14:36.680 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.680 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:36.680 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.680 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:36.680 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.680 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:14:36.680 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.680 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:36.680 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.680 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:36.680 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.680 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:36.680 [2024-07-24 17:15:22.777335] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:36.680 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.680 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:14:36.680 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:14:36.680 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.680 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:36.680 Null2 00:14:36.680 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.680 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:14:36.680 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.680 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:36.680 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.680 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode2 Null2 00:14:36.680 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.680 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:36.680 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.680 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:14:36.680 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.680 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:36.680 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.680 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:14:36.680 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:14:36.680 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.680 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:36.680 Null3 00:14:36.681 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.681 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:14:36.681 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.681 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:36.681 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.681 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:14:36.681 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.681 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:36.681 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.681 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:14:36.681 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.681 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:36.681 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.681 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:14:36.681 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:14:36.681 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.681 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:36.681 Null4 00:14:36.681 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.681 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:14:36.681 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.681 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:36.681 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.681 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:14:36.681 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.681 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:36.681 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.681 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:14:36.681 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.681 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:36.681 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.681 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:36.681 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.681 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:36.681 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.681 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:14:36.681 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.681 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:36.681 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.681 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid=6e739a91-f4c2-49d2-9c71-a65eb7d5315b -t tcp -a 10.0.0.2 -s 4420 00:14:36.942 00:14:36.942 Discovery Log Number of Records 6, Generation counter 6 00:14:36.942 =====Discovery Log Entry 0====== 00:14:36.942 trtype: tcp 00:14:36.942 adrfam: ipv4 00:14:36.942 subtype: current discovery subsystem 00:14:36.942 treq: not required 00:14:36.942 portid: 0 
00:14:36.942 trsvcid: 4420 00:14:36.942 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:36.942 traddr: 10.0.0.2 00:14:36.942 eflags: explicit discovery connections, duplicate discovery information 00:14:36.942 sectype: none 00:14:36.942 =====Discovery Log Entry 1====== 00:14:36.942 trtype: tcp 00:14:36.942 adrfam: ipv4 00:14:36.942 subtype: nvme subsystem 00:14:36.942 treq: not required 00:14:36.942 portid: 0 00:14:36.942 trsvcid: 4420 00:14:36.942 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:36.942 traddr: 10.0.0.2 00:14:36.942 eflags: none 00:14:36.942 sectype: none 00:14:36.942 =====Discovery Log Entry 2====== 00:14:36.942 trtype: tcp 00:14:36.942 adrfam: ipv4 00:14:36.942 subtype: nvme subsystem 00:14:36.942 treq: not required 00:14:36.942 portid: 0 00:14:36.942 trsvcid: 4420 00:14:36.942 subnqn: nqn.2016-06.io.spdk:cnode2 00:14:36.942 traddr: 10.0.0.2 00:14:36.942 eflags: none 00:14:36.942 sectype: none 00:14:36.942 =====Discovery Log Entry 3====== 00:14:36.942 trtype: tcp 00:14:36.942 adrfam: ipv4 00:14:36.942 subtype: nvme subsystem 00:14:36.942 treq: not required 00:14:36.942 portid: 0 00:14:36.942 trsvcid: 4420 00:14:36.942 subnqn: nqn.2016-06.io.spdk:cnode3 00:14:36.942 traddr: 10.0.0.2 00:14:36.942 eflags: none 00:14:36.942 sectype: none 00:14:36.942 =====Discovery Log Entry 4====== 00:14:36.942 trtype: tcp 00:14:36.942 adrfam: ipv4 00:14:36.942 subtype: nvme subsystem 00:14:36.942 treq: not required 00:14:36.942 portid: 0 00:14:36.942 trsvcid: 4420 00:14:36.942 subnqn: nqn.2016-06.io.spdk:cnode4 00:14:36.942 traddr: 10.0.0.2 00:14:36.942 eflags: none 00:14:36.942 sectype: none 00:14:36.942 =====Discovery Log Entry 5====== 00:14:36.942 trtype: tcp 00:14:36.942 adrfam: ipv4 00:14:36.942 subtype: discovery subsystem referral 00:14:36.942 treq: not required 00:14:36.942 portid: 0 00:14:36.942 trsvcid: 4430 00:14:36.942 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:36.942 traddr: 10.0.0.2 00:14:36.942 eflags: none 00:14:36.942 sectype: none 00:14:36.942 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:14:36.942 Perform nvmf subsystem discovery via RPC 00:14:36.942 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:14:36.942 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.942 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:36.942 [ 00:14:36.942 { 00:14:36.942 "allow_any_host": true, 00:14:36.942 "hosts": [], 00:14:36.942 "listen_addresses": [ 00:14:36.942 { 00:14:36.942 "adrfam": "IPv4", 00:14:36.942 "traddr": "10.0.0.2", 00:14:36.942 "trsvcid": "4420", 00:14:36.942 "trtype": "TCP" 00:14:36.942 } 00:14:36.942 ], 00:14:36.942 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:36.942 "subtype": "Discovery" 00:14:36.942 }, 00:14:36.942 { 00:14:36.942 "allow_any_host": true, 00:14:36.942 "hosts": [], 00:14:36.942 "listen_addresses": [ 00:14:36.942 { 00:14:36.942 "adrfam": "IPv4", 00:14:36.942 "traddr": "10.0.0.2", 00:14:36.942 "trsvcid": "4420", 00:14:36.942 "trtype": "TCP" 00:14:36.942 } 00:14:36.942 ], 00:14:36.942 "max_cntlid": 65519, 00:14:36.942 "max_namespaces": 32, 00:14:36.942 "min_cntlid": 1, 00:14:36.942 "model_number": "SPDK bdev Controller", 00:14:36.942 "namespaces": [ 00:14:36.942 { 00:14:36.942 "bdev_name": "Null1", 00:14:36.942 "name": "Null1", 00:14:36.942 "nguid": 
"A91D1F28F44645FF8DE50EEB21FAC17B", 00:14:36.942 "nsid": 1, 00:14:36.942 "uuid": "a91d1f28-f446-45ff-8de5-0eeb21fac17b" 00:14:36.942 } 00:14:36.942 ], 00:14:36.942 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:36.942 "serial_number": "SPDK00000000000001", 00:14:36.942 "subtype": "NVMe" 00:14:36.942 }, 00:14:36.942 { 00:14:36.942 "allow_any_host": true, 00:14:36.942 "hosts": [], 00:14:36.942 "listen_addresses": [ 00:14:36.942 { 00:14:36.942 "adrfam": "IPv4", 00:14:36.942 "traddr": "10.0.0.2", 00:14:36.942 "trsvcid": "4420", 00:14:36.942 "trtype": "TCP" 00:14:36.942 } 00:14:36.942 ], 00:14:36.942 "max_cntlid": 65519, 00:14:36.942 "max_namespaces": 32, 00:14:36.942 "min_cntlid": 1, 00:14:36.942 "model_number": "SPDK bdev Controller", 00:14:36.942 "namespaces": [ 00:14:36.942 { 00:14:36.942 "bdev_name": "Null2", 00:14:36.942 "name": "Null2", 00:14:36.942 "nguid": "893DC383DAB345B184A10C4CAE0559CE", 00:14:36.942 "nsid": 1, 00:14:36.942 "uuid": "893dc383-dab3-45b1-84a1-0c4cae0559ce" 00:14:36.942 } 00:14:36.942 ], 00:14:36.942 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:14:36.942 "serial_number": "SPDK00000000000002", 00:14:36.942 "subtype": "NVMe" 00:14:36.942 }, 00:14:36.942 { 00:14:36.942 "allow_any_host": true, 00:14:36.942 "hosts": [], 00:14:36.942 "listen_addresses": [ 00:14:36.942 { 00:14:36.942 "adrfam": "IPv4", 00:14:36.942 "traddr": "10.0.0.2", 00:14:36.942 "trsvcid": "4420", 00:14:36.942 "trtype": "TCP" 00:14:36.942 } 00:14:36.942 ], 00:14:36.942 "max_cntlid": 65519, 00:14:36.942 "max_namespaces": 32, 00:14:36.942 "min_cntlid": 1, 00:14:36.942 "model_number": "SPDK bdev Controller", 00:14:36.943 "namespaces": [ 00:14:36.943 { 00:14:36.943 "bdev_name": "Null3", 00:14:36.943 "name": "Null3", 00:14:36.943 "nguid": "12E7FF8D20DB4A2B99AC07408328F7AB", 00:14:36.943 "nsid": 1, 00:14:36.943 "uuid": "12e7ff8d-20db-4a2b-99ac-07408328f7ab" 00:14:36.943 } 00:14:36.943 ], 00:14:36.943 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:14:36.943 "serial_number": "SPDK00000000000003", 00:14:36.943 "subtype": "NVMe" 00:14:36.943 }, 00:14:36.943 { 00:14:36.943 "allow_any_host": true, 00:14:36.943 "hosts": [], 00:14:36.943 "listen_addresses": [ 00:14:36.943 { 00:14:36.943 "adrfam": "IPv4", 00:14:36.943 "traddr": "10.0.0.2", 00:14:36.943 "trsvcid": "4420", 00:14:36.943 "trtype": "TCP" 00:14:36.943 } 00:14:36.943 ], 00:14:36.943 "max_cntlid": 65519, 00:14:36.943 "max_namespaces": 32, 00:14:36.943 "min_cntlid": 1, 00:14:36.943 "model_number": "SPDK bdev Controller", 00:14:36.943 "namespaces": [ 00:14:36.943 { 00:14:36.943 "bdev_name": "Null4", 00:14:36.943 "name": "Null4", 00:14:36.943 "nguid": "7A56F63B2CFD4B73A5591A8FF5D1CFAE", 00:14:36.943 "nsid": 1, 00:14:36.943 "uuid": "7a56f63b-2cfd-4b73-a559-1a8ff5d1cfae" 00:14:36.943 } 00:14:36.943 ], 00:14:36.943 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:14:36.943 "serial_number": "SPDK00000000000004", 00:14:36.943 "subtype": "NVMe" 00:14:36.943 } 00:14:36.943 ] 00:14:36.943 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.943 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:14:36.943 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:14:36.943 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:36.943 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:36.943 17:15:22 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:36.943 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.943 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:14:36.943 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.943 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:36.943 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.943 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:14:36.943 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:14:36.943 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.943 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:36.943 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.943 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:14:36.943 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.943 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:36.943 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.943 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:14:36.943 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:14:36.943 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.943 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:36.943 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.943 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:14:36.943 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.943 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:36.943 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.943 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:14:36.943 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:14:36.943 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.943 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:36.943 
17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.943 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:14:36.943 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.943 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:36.943 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.943 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:14:36.943 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.943 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:36.943 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.943 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:14:36.943 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:14:36.943 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.943 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:36.943 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.943 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:14:36.943 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:14:36.943 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:14:36.943 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:14:36.943 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:36.943 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:14:36.943 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:36.943 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:14:36.943 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:36.943 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:36.943 rmmod nvme_tcp 00:14:36.943 rmmod nvme_fabrics 00:14:36.943 rmmod nvme_keyring 00:14:37.202 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:37.202 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:14:37.202 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:14:37.202 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 73658 ']' 00:14:37.202 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 73658 
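The xtrace above is target/discovery.sh driving the target through SPDK's JSON-RPC interface; rpc_cmd is the autotest harness wrapper around that interface. A minimal sketch of the same flow for a single subsystem, assuming a running nvmf_tgt and the stock scripts/rpc.py client instead of the harness wrapper (the NQN, serial number, bdev parameters and addresses are the ones visible in the log; the script loops this for cnode1 through cnode4):

  # transport is created once, before any subsystems
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  # per iteration: a null bdev, a subsystem, a namespace, a TCP listener
  scripts/rpc.py bdev_null_create Null1 102400 512
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # expose the discovery service plus a referral, then verify from the initiator
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
  nvme discover -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_get_subsystems

The nvme discover call in the log additionally passes the --hostnqn/--hostid pair generated by nvme gen-hostnqn; the expected result is the six discovery log records and the four NVMe subsystems dumped above, followed by the symmetric teardown (nvmf_delete_subsystem, bdev_null_delete, nvmf_discovery_remove_referral).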
00:14:37.202 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 73658 ']' 00:14:37.202 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 73658 00:14:37.202 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:14:37.202 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:37.202 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73658 00:14:37.202 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:37.202 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:37.202 killing process with pid 73658 00:14:37.202 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73658' 00:14:37.202 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 73658 00:14:37.202 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 73658 00:14:37.461 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:37.461 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:37.461 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:37.461 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:37.461 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:37.461 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:37.461 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:37.461 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:37.461 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:37.461 00:14:37.461 real 0m2.353s 00:14:37.461 user 0m6.408s 00:14:37.461 sys 0m0.606s 00:14:37.461 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:37.461 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:37.461 ************************************ 00:14:37.461 END TEST nvmf_target_discovery 00:14:37.461 ************************************ 00:14:37.461 17:15:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:14:37.461 17:15:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:37.461 17:15:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:37.461 17:15:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:37.461 ************************************ 00:14:37.461 START TEST nvmf_referrals 00:14:37.461 
************************************ 00:14:37.462 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:14:37.462 * Looking for test storage... 00:14:37.462 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:37.462 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:37.462 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:14:37.462 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:37.462 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:37.462 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:37.462 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:37.462 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:37.462 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:37.462 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:37.462 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:37.462 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:37.462 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:37.462 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:14:37.462 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:14:37.462 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:37.462 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:37.462 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:37.462 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:37.462 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:37.462 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:37.462 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:37.462 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:37.462 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.462 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.462 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.462 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:14:37.462 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:37.462 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:14:37.462 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:37.462 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:37.462 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:37.462 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:37.462 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:37.462 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:37.462 17:15:23 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:37.462 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:37.462 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:14:37.462 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:14:37.462 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:14:37.462 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:14:37.462 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:14:37.462 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:14:37.462 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:14:37.462 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:37.462 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:37.462 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:37.462 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:37.462 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:37.462 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:37.462 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:37.462 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:37.462 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:37.462 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:37.462 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:37.462 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:37.462 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:37.462 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:37.462 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:37.462 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:37.462 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:37.462 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:37.462 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:37.462 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:37.462 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:37.462 17:15:23 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:37.462 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:37.462 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:37.462 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:37.462 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:37.462 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:37.462 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:37.462 Cannot find device "nvmf_tgt_br" 00:14:37.462 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@155 -- # true 00:14:37.462 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:37.462 Cannot find device "nvmf_tgt_br2" 00:14:37.462 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@156 -- # true 00:14:37.462 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:37.721 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:37.721 Cannot find device "nvmf_tgt_br" 00:14:37.721 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@158 -- # true 00:14:37.721 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:37.721 Cannot find device "nvmf_tgt_br2" 00:14:37.721 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@159 -- # true 00:14:37.721 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:37.721 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:37.721 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:37.721 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:37.721 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@162 -- # true 00:14:37.721 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:37.721 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:37.721 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@163 -- # true 00:14:37.721 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:37.721 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:37.721 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:37.721 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:37.721 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 
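Before the referrals test can reach 10.0.0.2, nvmf_veth_init builds the virtual topology; the "Cannot find device" / "Cannot open network namespace" messages above are just the pre-cleanup of a topology that does not exist yet. Condensed from the commands in the surrounding xtrace (the addressing, bridging and iptables steps continue right after this point in the log; the addresses match the common.sh values dumped earlier), the target ends up inside the nvmf_tgt_ns_spdk namespace while the initiator stays in the root namespace, joined over the nvmf_br bridge:

  ip netns add nvmf_tgt_ns_spdk
  # one veth pair for the initiator, two for the target side
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  # target-facing ends move into the namespace
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # 10.0.0.1 -> initiator, 10.0.0.2 / 10.0.0.3 -> target namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  # bring the links up, enslave the *_br ends to nvmf_br, allow TCP/4420 in
  ip link add nvmf_br type bridge
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

The three pings that follow in the log (10.0.0.2, 10.0.0.3 from the root namespace, 10.0.0.1 from inside the namespace) confirm the topology before nvmf_tgt is started with ip netns exec.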
00:14:37.721 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:37.721 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:37.721 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:37.721 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:37.721 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:37.721 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:37.721 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:37.721 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:37.721 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:37.721 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:37.721 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:37.721 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:37.721 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:37.721 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:37.721 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:37.721 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:37.721 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:37.721 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:37.721 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:37.721 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:37.721 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.114 ms 00:14:37.721 00:14:37.721 --- 10.0.0.2 ping statistics --- 00:14:37.721 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:37.721 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:14:37.721 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:37.721 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:37.721 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.078 ms 00:14:37.721 00:14:37.721 --- 10.0.0.3 ping statistics --- 00:14:37.721 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:37.721 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:14:37.721 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:37.721 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:37.721 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:14:37.721 00:14:37.721 --- 10.0.0.1 ping statistics --- 00:14:37.721 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:37.721 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:14:37.721 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:37.721 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@433 -- # return 0 00:14:37.721 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:37.721 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:37.721 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:37.721 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:37.721 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:37.721 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:37.721 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:37.980 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:14:37.980 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:37.980 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:37.980 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:37.980 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=73886 00:14:37.980 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:37.980 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 73886 00:14:37.980 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 73886 ']' 00:14:37.980 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:37.980 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:37.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:37.980 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:37.980 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:37.980 17:15:23 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:37.980 [2024-07-24 17:15:24.038291] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
00:14:37.980 [2024-07-24 17:15:24.038448] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:37.980 [2024-07-24 17:15:24.180509] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:38.238 [2024-07-24 17:15:24.297716] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:38.238 [2024-07-24 17:15:24.297760] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:38.238 [2024-07-24 17:15:24.297769] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:38.238 [2024-07-24 17:15:24.297777] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:38.238 [2024-07-24 17:15:24.297784] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:38.238 [2024-07-24 17:15:24.297999] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:38.238 [2024-07-24 17:15:24.298183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:38.238 [2024-07-24 17:15:24.298876] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:38.238 [2024-07-24 17:15:24.298885] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:38.804 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:38.804 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:14:38.804 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:38.804 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:38.804 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:39.063 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:39.063 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:39.063 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.063 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:39.063 [2024-07-24 17:15:25.053065] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:39.063 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.063 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:14:39.063 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.063 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:39.063 [2024-07-24 17:15:25.076659] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:14:39.063 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.064 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:14:39.064 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.064 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:39.064 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.064 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:14:39.064 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.064 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:39.064 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.064 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:14:39.064 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.064 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:39.064 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.064 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:14:39.064 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:39.064 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.064 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:39.064 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.064 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:14:39.064 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:14:39.064 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:14:39.064 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:39.064 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:14:39.064 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.064 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:39.064 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:14:39.064 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.064 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:14:39.064 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:14:39.064 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:14:39.064 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:39.064 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:39.064 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid=6e739a91-f4c2-49d2-9c71-a65eb7d5315b -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:39.064 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:39.064 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:39.322 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:14:39.322 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:14:39.322 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:14:39.322 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.322 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:39.322 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.322 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:14:39.322 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.322 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:39.322 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.322 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:14:39.322 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.322 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:39.322 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.322 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:14:39.323 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:39.323 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.323 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:39.323 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.323 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:14:39.323 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:14:39.323 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:39.323 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:14:39.323 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid=6e739a91-f4c2-49d2-9c71-a65eb7d5315b -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:39.323 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:39.323 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:39.323 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:14:39.323 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:14:39.323 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:14:39.323 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.323 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:39.323 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.323 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:14:39.323 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.323 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:39.323 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.323 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:14:39.323 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:14:39.323 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:39.323 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:14:39.323 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:14:39.323 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.323 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:39.323 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.323 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:14:39.323 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:14:39.323 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:14:39.323 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:39.323 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:39.323 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid=6e739a91-f4c2-49d2-9c71-a65eb7d5315b -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:39.323 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:39.323 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:39.581 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:14:39.581 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:14:39.581 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:14:39.581 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:14:39.581 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:14:39.581 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid=6e739a91-f4c2-49d2-9c71-a65eb7d5315b -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:39.581 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:14:39.581 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:14:39.581 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:14:39.581 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:14:39.581 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:14:39.581 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid=6e739a91-f4c2-49d2-9c71-a65eb7d5315b -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:39.581 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:14:39.581 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:14:39.581 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:14:39.581 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.581 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:39.581 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.581 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:14:39.581 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:14:39.581 17:15:25 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:39.581 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:14:39.581 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:14:39.581 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.581 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:39.581 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.849 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:14:39.849 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:14:39.849 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:14:39.849 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:39.849 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:39.849 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid=6e739a91-f4c2-49d2-9c71-a65eb7d5315b -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:39.849 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:39.849 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:39.849 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:14:39.849 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:14:39.849 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:14:39.849 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:14:39.849 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:14:39.849 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid=6e739a91-f4c2-49d2-9c71-a65eb7d5315b -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:39.849 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:14:39.849 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:14:39.849 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:14:39.849 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:14:39.849 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:14:39.849 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 
--hostid=6e739a91-f4c2-49d2-9c71-a65eb7d5315b -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:39.849 17:15:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:14:39.849 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:14:39.849 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:14:39.849 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.849 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:39.849 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.849 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:39.849 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:14:39.849 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.849 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:39.849 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.108 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:14:40.108 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:14:40.108 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:40.108 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:40.108 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:40.108 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid=6e739a91-f4c2-49d2-9c71-a65eb7d5315b -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:40.108 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:40.108 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:14:40.108 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:14:40.108 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:14:40.108 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:14:40.108 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:40.108 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:14:40.108 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:40.108 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:14:40.108 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:40.108 
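[editor's note] For readers following the referrals trace above: the test drives the discovery-referral RPCs and then cross-checks the result two ways, once through the RPC interface and once over the wire with `nvme discover` against the discovery service on port 8009. Below is a minimal sketch of that flow, distilled from the commands captured in this log (the rpc.py path, addresses, and ports are taken from the trace; `rpc_cmd` in the trace is assumed to resolve to SPDK's `scripts/rpc.py`, and the `--hostnqn`/`--hostid` options are omitted for brevity). This block is an illustration, not part of the captured output.

    # Add a referral to the discovery service, then verify it two ways.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumed equivalent of rpc_cmd

    $rpc nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery

    # 1) Ask the target which referrals it advertises.
    $rpc nvmf_discovery_get_referrals | jq -r '.[].address.traddr'

    # 2) Ask the discovery service over the wire, filtering out the local entry.
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
      | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'

    # Remove it again and confirm the referral list is empty.
    $rpc nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery
    $rpc nvmf_discovery_get_referrals | jq length

The teardown for this test (module removal and process cleanup) follows in the trace below.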
17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:40.108 rmmod nvme_tcp 00:14:40.108 rmmod nvme_fabrics 00:14:40.108 rmmod nvme_keyring 00:14:40.108 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:40.108 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:14:40.108 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:14:40.108 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 73886 ']' 00:14:40.108 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 73886 00:14:40.108 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 73886 ']' 00:14:40.108 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 73886 00:14:40.108 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:14:40.108 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:40.108 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73886 00:14:40.108 killing process with pid 73886 00:14:40.108 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:40.108 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:40.108 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73886' 00:14:40.108 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@969 -- # kill 73886 00:14:40.108 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 73886 00:14:40.367 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:40.367 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:40.367 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:40.367 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:40.367 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:40.367 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:40.367 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:40.367 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:40.367 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:40.367 00:14:40.367 real 0m3.020s 00:14:40.367 user 0m9.806s 00:14:40.367 sys 0m0.799s 00:14:40.367 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:40.367 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:40.367 ************************************ 00:14:40.367 END TEST nvmf_referrals 00:14:40.367 ************************************ 00:14:40.367 17:15:26 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:14:40.367 17:15:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:40.367 17:15:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:40.367 17:15:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:40.627 ************************************ 00:14:40.627 START TEST nvmf_connect_disconnect 00:14:40.627 ************************************ 00:14:40.627 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:14:40.627 * Looking for test storage... 00:14:40.627 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:40.627 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:40.627 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:14:40.627 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:40.627 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:40.627 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:40.627 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:40.627 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:40.627 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:40.627 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:40.627 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:40.627 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:40.627 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:40.627 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:14:40.627 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:14:40.627 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:40.627 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:40.627 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:40.627 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:40.627 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:40.627 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:40.627 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:40.627 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:40.627 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:40.627 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:40.627 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:40.627 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:14:40.627 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:40.627 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:14:40.627 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:40.627 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # 
build_nvmf_app_args 00:14:40.627 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:40.627 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:40.627 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:40.627 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:40.627 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:40.627 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:40.627 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:40.627 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:40.627 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:14:40.627 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:40.627 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:40.627 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:40.627 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:40.627 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:40.627 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:40.627 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:40.627 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:40.627 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:40.627 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:40.627 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:40.627 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:40.627 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:40.627 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:40.627 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:40.627 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:40.627 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:40.627 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:40.627 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:40.627 17:15:26 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:40.627 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:40.627 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:40.627 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:40.627 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:40.627 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:40.627 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:40.628 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:40.628 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:40.628 Cannot find device "nvmf_tgt_br" 00:14:40.628 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@155 -- # true 00:14:40.628 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:40.628 Cannot find device "nvmf_tgt_br2" 00:14:40.628 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # true 00:14:40.628 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:40.628 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:40.628 Cannot find device "nvmf_tgt_br" 00:14:40.628 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@158 -- # true 00:14:40.628 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:40.628 Cannot find device "nvmf_tgt_br2" 00:14:40.628 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # true 00:14:40.628 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:40.628 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:40.628 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:40.628 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:40.628 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # true 00:14:40.628 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:40.628 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:40.628 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # true 00:14:40.628 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:40.628 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@169 -- # ip link add 
nvmf_init_if type veth peer name nvmf_init_br 00:14:40.887 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:40.887 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:40.887 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:40.887 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:40.887 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:40.887 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:40.887 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:40.887 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:40.887 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:40.887 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:40.887 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:40.887 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:40.887 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:40.887 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:40.887 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:40.887 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:40.887 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:40.887 17:15:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:40.887 17:15:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:40.887 17:15:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:40.887 17:15:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:40.887 17:15:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:40.887 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:40.887 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.104 ms 00:14:40.887 00:14:40.887 --- 10.0.0.2 ping statistics --- 00:14:40.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:40.887 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:14:40.887 17:15:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:40.887 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:40.887 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:14:40.887 00:14:40.887 --- 10.0.0.3 ping statistics --- 00:14:40.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:40.887 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:14:40.887 17:15:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:40.887 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:40.887 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:14:40.887 00:14:40.887 --- 10.0.0.1 ping statistics --- 00:14:40.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:40.887 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:14:40.887 17:15:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:40.887 17:15:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@433 -- # return 0 00:14:40.887 17:15:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:40.887 17:15:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:40.887 17:15:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:40.887 17:15:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:40.887 17:15:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:40.887 17:15:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:40.887 17:15:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:40.887 17:15:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:14:40.887 17:15:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:40.887 17:15:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:40.887 17:15:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:40.887 17:15:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=74193 00:14:40.887 17:15:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:40.887 17:15:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 74193 00:14:40.887 17:15:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 74193 ']' 00:14:40.887 17:15:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:40.887 17:15:27 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:40.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:40.887 17:15:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:40.887 17:15:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:40.887 17:15:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:41.145 [2024-07-24 17:15:27.130855] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:14:41.146 [2024-07-24 17:15:27.130960] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:41.146 [2024-07-24 17:15:27.271992] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:41.404 [2024-07-24 17:15:27.389088] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:41.404 [2024-07-24 17:15:27.389194] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:41.404 [2024-07-24 17:15:27.389206] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:41.404 [2024-07-24 17:15:27.389214] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:41.404 [2024-07-24 17:15:27.389221] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
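[editor's note] The `nvmf_veth_init` portion of the trace above (nvmf/common.sh@141-207) builds the virtual test network that the tcp tests run over: a network namespace for the target, veth pairs bridged back to the host, 10.0.0.1 on the initiator side and 10.0.0.2/10.0.0.3 inside the namespace, an iptables accept rule for port 4420, and ping checks. A condensed sketch of the same topology, using only commands that appear in the captured trace (the second target interface, nvmf_tgt_if2 / 10.0.0.3, is added the same way and is left out here):

    # Condensed veth/netns topology from nvmf_veth_init, as captured above.
    ip netns add nvmf_tgt_ns_spdk

    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

    ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up

    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br

    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2        # host -> target-namespace sanity check, as in the trace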
00:14:41.404 [2024-07-24 17:15:27.389303] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:41.404 [2024-07-24 17:15:27.389592] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:41.404 [2024-07-24 17:15:27.390154] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:41.404 [2024-07-24 17:15:27.390167] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:41.969 17:15:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:41.969 17:15:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:14:41.969 17:15:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:41.969 17:15:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:41.969 17:15:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:41.969 17:15:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:41.969 17:15:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:14:41.969 17:15:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.969 17:15:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:42.228 [2024-07-24 17:15:28.206172] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:42.228 17:15:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.228 17:15:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:14:42.228 17:15:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.228 17:15:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:42.228 17:15:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.228 17:15:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:14:42.228 17:15:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:42.228 17:15:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.228 17:15:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:42.228 17:15:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.228 17:15:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:42.228 17:15:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.228 17:15:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:42.228 17:15:28 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.228 17:15:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:42.228 17:15:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.228 17:15:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:42.228 [2024-07-24 17:15:28.273696] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:42.228 17:15:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.228 17:15:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:14:42.228 17:15:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:14:42.228 17:15:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:14:44.758 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:46.721 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:49.271 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:51.170 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:53.702 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:53.702 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:14:53.702 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:14:53.702 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:53.702 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:14:53.702 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:53.702 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:14:53.702 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:53.702 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:53.702 rmmod nvme_tcp 00:14:53.702 rmmod nvme_fabrics 00:14:53.702 rmmod nvme_keyring 00:14:53.702 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:53.702 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:14:53.702 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:14:53.702 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 74193 ']' 00:14:53.702 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 74193 00:14:53.702 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 74193 ']' 00:14:53.702 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 74193 00:14:53.702 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 
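[editor's note] The connect_disconnect trace above configures the target with a handful of RPCs and then runs `num_iterations=5` connect/disconnect cycles; the "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" lines are the per-iteration output. The sketch below mirrors the rpc_cmd calls captured in the trace; the `nvme connect` / `nvme disconnect` pair is a simplified stand-in for what connect_disconnect.sh performs between those log markers (the real script passes additional host options), so treat it as illustrative only.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumed equivalent of rpc_cmd

    # Target setup, mirroring the rpc_cmd calls in the trace.
    $rpc nvmf_create_transport -t tcp -o -u 8192 -c 0
    $rpc bdev_malloc_create 64 512                    # trace assigns the result to bdev=Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # One connect/disconnect iteration; the test repeats this five times.
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1     # prints "... disconnected 1 controller(s)"

The remainder of the trace below is the standard teardown (module removal, killing the nvmf_tgt pid) followed by the start of the nvmf_multitarget test.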
00:14:53.702 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:53.702 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74193 00:14:53.702 killing process with pid 74193 00:14:53.702 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:53.702 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:53.702 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74193' 00:14:53.702 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 74193 00:14:53.702 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 74193 00:14:53.702 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:53.702 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:53.702 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:53.702 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:53.702 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:53.702 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:53.702 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:53.702 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:53.702 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:53.702 00:14:53.702 real 0m13.316s 00:14:53.702 user 0m48.886s 00:14:53.702 sys 0m1.924s 00:14:53.702 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:53.702 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:53.702 ************************************ 00:14:53.702 END TEST nvmf_connect_disconnect 00:14:53.702 ************************************ 00:14:53.961 17:15:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:14:53.961 17:15:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:53.961 17:15:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:53.961 17:15:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:53.961 ************************************ 00:14:53.961 START TEST nvmf_multitarget 00:14:53.961 ************************************ 00:14:53.961 17:15:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:14:53.961 * Looking for test storage... 
00:14:53.961 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:53.961 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:53.961 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:14:53.961 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:53.961 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:53.961 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:53.961 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:53.961 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:53.961 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:53.961 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:53.961 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:53.961 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:53.961 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:53.961 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:14:53.961 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:14:53.961 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:53.961 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:53.961 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:53.961 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:53.961 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:53.961 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:53.961 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:53.961 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:53.961 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.961 17:15:40 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.961 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.961 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:14:53.961 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.962 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:14:53.962 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:53.962 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:53.962 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:53.962 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:53.962 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:53.962 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:53.962 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:53.962 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:53.962 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:14:53.962 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:14:53.962 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp 
']' 00:14:53.962 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:53.962 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:53.962 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:53.962 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:53.962 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:53.962 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:53.962 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:53.962 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:53.962 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:53.962 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:53.962 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:53.962 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:53.962 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:53.962 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:53.962 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:53.962 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:53.962 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:53.962 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:53.962 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:53.962 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:53.962 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:53.962 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:53.962 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:53.962 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:53.962 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:53.962 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:53.962 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:53.962 Cannot find device "nvmf_tgt_br" 00:14:53.962 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@155 -- # true 00:14:53.962 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@156 -- # ip link 
set nvmf_tgt_br2 nomaster 00:14:53.962 Cannot find device "nvmf_tgt_br2" 00:14:53.962 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@156 -- # true 00:14:53.962 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:53.962 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:53.962 Cannot find device "nvmf_tgt_br" 00:14:53.962 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@158 -- # true 00:14:53.962 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:53.962 Cannot find device "nvmf_tgt_br2" 00:14:53.962 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@159 -- # true 00:14:53.962 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:54.220 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:54.220 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:54.220 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:54.221 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@162 -- # true 00:14:54.221 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:54.221 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:54.221 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@163 -- # true 00:14:54.221 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:54.221 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:54.221 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:54.221 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:54.221 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:54.221 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:54.221 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:54.221 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:54.221 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:54.221 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:54.221 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:54.221 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:54.221 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 
00:14:54.221 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:54.221 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:54.221 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:54.221 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:54.221 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:54.221 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:54.221 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:54.221 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:54.221 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:54.221 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:54.221 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:54.221 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:54.221 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:14:54.221 00:14:54.221 --- 10.0.0.2 ping statistics --- 00:14:54.221 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:54.221 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:14:54.221 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:54.221 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:54.221 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:14:54.221 00:14:54.221 --- 10.0.0.3 ping statistics --- 00:14:54.221 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:54.221 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:14:54.221 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:54.221 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:54.221 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:14:54.221 00:14:54.221 --- 10.0.0.1 ping statistics --- 00:14:54.221 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:54.221 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:14:54.221 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:54.221 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@433 -- # return 0 00:14:54.221 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:54.221 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:54.221 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:54.221 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:54.221 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:54.221 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:54.221 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:54.480 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:14:54.480 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:54.480 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:54.480 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:54.480 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=74591 00:14:54.480 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:54.480 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 74591 00:14:54.480 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 74591 ']' 00:14:54.480 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:54.480 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:54.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:54.480 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:54.480 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:54.480 17:15:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:54.480 [2024-07-24 17:15:40.514484] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
00:14:54.480 [2024-07-24 17:15:40.514580] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:54.480 [2024-07-24 17:15:40.649707] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:54.738 [2024-07-24 17:15:40.765882] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:54.738 [2024-07-24 17:15:40.765946] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:54.738 [2024-07-24 17:15:40.765958] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:54.738 [2024-07-24 17:15:40.765966] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:54.738 [2024-07-24 17:15:40.765974] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:54.738 [2024-07-24 17:15:40.766800] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:54.738 [2024-07-24 17:15:40.766921] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:54.738 [2024-07-24 17:15:40.767000] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:54.738 [2024-07-24 17:15:40.767002] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:55.673 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:55.673 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:14:55.673 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:55.673 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:55.673 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:55.673 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:55.673 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:55.673 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:55.673 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:14:55.673 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:14:55.673 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:14:55.673 "nvmf_tgt_1" 00:14:55.673 17:15:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:14:55.931 "nvmf_tgt_2" 00:14:55.931 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:55.931 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@28 -- # jq length 00:14:55.931 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:14:55.931 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:14:56.190 true 00:14:56.190 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:14:56.190 true 00:14:56.190 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:56.190 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:14:56.448 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:14:56.448 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:14:56.448 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:14:56.448 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:56.448 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:14:56.448 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:56.448 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:14:56.448 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:56.448 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:56.448 rmmod nvme_tcp 00:14:56.448 rmmod nvme_fabrics 00:14:56.448 rmmod nvme_keyring 00:14:56.448 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:56.448 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:14:56.448 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:14:56.448 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 74591 ']' 00:14:56.448 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 74591 00:14:56.448 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 74591 ']' 00:14:56.448 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 74591 00:14:56.448 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:14:56.448 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:56.448 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74591 00:14:56.448 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:56.448 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:56.448 killing process with pid 74591 00:14:56.448 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 
74591' 00:14:56.448 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 74591 00:14:56.449 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 74591 00:14:56.755 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:56.755 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:56.755 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:56.755 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:56.755 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:56.755 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:56.755 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:56.755 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:56.755 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:56.755 00:14:56.755 real 0m2.964s 00:14:56.755 user 0m9.686s 00:14:56.755 sys 0m0.733s 00:14:56.755 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:56.755 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:56.755 ************************************ 00:14:56.755 END TEST nvmf_multitarget 00:14:56.755 ************************************ 00:14:56.755 17:15:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:14:56.755 17:15:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:56.755 17:15:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:56.755 17:15:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:57.015 ************************************ 00:14:57.015 START TEST nvmf_rpc 00:14:57.015 ************************************ 00:14:57.015 17:15:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:14:57.015 * Looking for test storage... 
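The multitarget test that just finished boils down to a short RPC round trip: create two extra targets next to the default one, confirm the count, delete them, and confirm the count drops back. A minimal replay, assuming a running nvmf_tgt and the multitarget_rpc.py path shown in the log:

#!/usr/bin/env bash
set -e
rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py

count() { "$rpc" nvmf_get_targets | jq length; }

[ "$(count)" -eq 1 ]                       # only the default target exists

"$rpc" nvmf_create_target -n nvmf_tgt_1 -s 32
"$rpc" nvmf_create_target -n nvmf_tgt_2 -s 32
[ "$(count)" -eq 3 ]                       # default target plus the two new ones

"$rpc" nvmf_delete_target -n nvmf_tgt_1
"$rpc" nvmf_delete_target -n nvmf_tgt_2
[ "$(count)" -eq 1 ]                       # back to just the default target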
00:14:57.015 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:57.015 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:57.015 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:14:57.015 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:57.015 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:57.015 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:57.015 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:57.015 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:57.015 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:57.015 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:57.015 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:57.015 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:57.015 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:57.015 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:14:57.015 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:14:57.015 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:57.015 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:57.015 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:57.015 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:57.015 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:57.015 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:57.015 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:57.015 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:57.015 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.015 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.015 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.015 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:14:57.015 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.015 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:14:57.015 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:57.015 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:57.015 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:57.015 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:57.015 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:57.015 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:57.015 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:57.015 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:57.015 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:14:57.015 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:14:57.015 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:57.015 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:57.015 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:57.015 17:15:43 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:57.015 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:57.015 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:57.015 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:57.015 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:57.015 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:57.015 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:57.015 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:57.015 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:57.015 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:57.015 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:57.015 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:57.015 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:57.015 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:57.015 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:57.015 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:57.015 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:57.015 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:57.015 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:57.015 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:57.015 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:57.015 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:57.015 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:57.015 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:57.015 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:57.015 Cannot find device "nvmf_tgt_br" 00:14:57.015 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@155 -- # true 00:14:57.015 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:57.015 Cannot find device "nvmf_tgt_br2" 00:14:57.015 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@156 -- # true 00:14:57.015 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:57.015 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:57.015 Cannot find device "nvmf_tgt_br" 00:14:57.015 17:15:43 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@158 -- # true 00:14:57.015 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:57.015 Cannot find device "nvmf_tgt_br2" 00:14:57.015 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@159 -- # true 00:14:57.015 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:57.015 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:57.015 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:57.015 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:57.016 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@162 -- # true 00:14:57.016 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:57.016 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:57.016 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@163 -- # true 00:14:57.016 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:57.016 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:57.016 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:57.284 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:57.284 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:57.284 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:57.284 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:57.284 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:57.284 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:57.284 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:57.284 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:57.284 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:57.284 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:57.284 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:57.284 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:57.284 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:57.284 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:57.284 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:57.284 
17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:57.284 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:57.284 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:57.284 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:57.284 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:57.284 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:57.284 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:57.284 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.124 ms 00:14:57.284 00:14:57.284 --- 10.0.0.2 ping statistics --- 00:14:57.284 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:57.284 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:14:57.284 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:57.284 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:57.284 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:14:57.284 00:14:57.284 --- 10.0.0.3 ping statistics --- 00:14:57.284 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:57.284 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:14:57.284 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:57.284 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:57.284 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:14:57.284 00:14:57.284 --- 10.0.0.1 ping statistics --- 00:14:57.284 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:57.284 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:14:57.284 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:57.284 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@433 -- # return 0 00:14:57.284 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:57.284 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:57.284 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:57.284 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:57.284 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:57.284 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:57.284 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:57.284 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:14:57.284 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:57.284 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:57.284 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:57.284 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=74823 00:14:57.284 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:57.284 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 74823 00:14:57.284 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 74823 ']' 00:14:57.284 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:57.284 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:57.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:57.284 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:57.284 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:57.284 17:15:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:57.548 [2024-07-24 17:15:43.536131] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:14:57.548 [2024-07-24 17:15:43.536227] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:57.548 [2024-07-24 17:15:43.677034] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:57.806 [2024-07-24 17:15:43.801584] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:57.806 [2024-07-24 17:15:43.801652] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:57.806 [2024-07-24 17:15:43.801664] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:57.806 [2024-07-24 17:15:43.801672] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:57.806 [2024-07-24 17:15:43.801680] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:57.806 [2024-07-24 17:15:43.801846] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:57.806 [2024-07-24 17:15:43.801977] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:57.806 [2024-07-24 17:15:43.802678] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:57.806 [2024-07-24 17:15:43.802689] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:58.373 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:58.373 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:14:58.373 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:58.373 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:58.373 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:58.373 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:58.373 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:14:58.373 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.373 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:58.631 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.631 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:14:58.631 "poll_groups": [ 00:14:58.631 { 00:14:58.631 "admin_qpairs": 0, 00:14:58.631 "completed_nvme_io": 0, 00:14:58.631 "current_admin_qpairs": 0, 00:14:58.631 "current_io_qpairs": 0, 00:14:58.631 "io_qpairs": 0, 00:14:58.631 "name": "nvmf_tgt_poll_group_000", 00:14:58.631 "pending_bdev_io": 0, 00:14:58.631 "transports": [] 00:14:58.631 }, 00:14:58.631 { 00:14:58.631 "admin_qpairs": 0, 00:14:58.631 "completed_nvme_io": 0, 00:14:58.631 "current_admin_qpairs": 0, 00:14:58.631 "current_io_qpairs": 0, 00:14:58.631 "io_qpairs": 0, 00:14:58.631 "name": "nvmf_tgt_poll_group_001", 00:14:58.631 "pending_bdev_io": 0, 00:14:58.631 "transports": [] 00:14:58.631 }, 00:14:58.631 { 00:14:58.631 "admin_qpairs": 0, 00:14:58.631 "completed_nvme_io": 0, 00:14:58.631 "current_admin_qpairs": 0, 00:14:58.631 "current_io_qpairs": 0, 00:14:58.631 "io_qpairs": 0, 00:14:58.631 "name": "nvmf_tgt_poll_group_002", 00:14:58.631 "pending_bdev_io": 0, 00:14:58.631 "transports": [] 00:14:58.631 }, 00:14:58.631 { 00:14:58.631 "admin_qpairs": 0, 00:14:58.631 "completed_nvme_io": 0, 00:14:58.631 "current_admin_qpairs": 0, 00:14:58.631 "current_io_qpairs": 0, 00:14:58.631 "io_qpairs": 0, 00:14:58.631 "name": "nvmf_tgt_poll_group_003", 00:14:58.631 "pending_bdev_io": 0, 00:14:58.631 "transports": [] 00:14:58.631 } 00:14:58.631 ], 00:14:58.631 "tick_rate": 2200000000 00:14:58.631 }' 00:14:58.631 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:14:58.632 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:14:58.632 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:14:58.632 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:14:58.632 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 
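The jcount and jsum helpers used in this part of the trace just feed the captured nvmf_get_stats JSON through jq, counting or summing the selected fields. A minimal stand-in, with rpc.py's location assumed (the harness goes through its rpc_cmd wrapper and keeps the JSON in $stats):

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
stats=$("$rpc_py" nvmf_get_stats)

jcount() { jq "$1" <<< "$stats" | wc -l; }
jsum()   { jq "$1" <<< "$stats" | awk '{s+=$1} END {print s}'; }

jcount '.poll_groups[].name'          # 4: one poll group per core in -m 0xF
jsum   '.poll_groups[].admin_qpairs'  # 0: no host has connected yet
jsum   '.poll_groups[].io_qpairs'     # 0: no host has connected yet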
00:14:58.632 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:14:58.632 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:14:58.632 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:58.632 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.632 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:58.632 [2024-07-24 17:15:44.736071] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:58.632 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.632 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:14:58.632 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.632 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:58.632 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.632 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:14:58.632 "poll_groups": [ 00:14:58.632 { 00:14:58.632 "admin_qpairs": 0, 00:14:58.632 "completed_nvme_io": 0, 00:14:58.632 "current_admin_qpairs": 0, 00:14:58.632 "current_io_qpairs": 0, 00:14:58.632 "io_qpairs": 0, 00:14:58.632 "name": "nvmf_tgt_poll_group_000", 00:14:58.632 "pending_bdev_io": 0, 00:14:58.632 "transports": [ 00:14:58.632 { 00:14:58.632 "trtype": "TCP" 00:14:58.632 } 00:14:58.632 ] 00:14:58.632 }, 00:14:58.632 { 00:14:58.632 "admin_qpairs": 0, 00:14:58.632 "completed_nvme_io": 0, 00:14:58.632 "current_admin_qpairs": 0, 00:14:58.632 "current_io_qpairs": 0, 00:14:58.632 "io_qpairs": 0, 00:14:58.632 "name": "nvmf_tgt_poll_group_001", 00:14:58.632 "pending_bdev_io": 0, 00:14:58.632 "transports": [ 00:14:58.632 { 00:14:58.632 "trtype": "TCP" 00:14:58.632 } 00:14:58.632 ] 00:14:58.632 }, 00:14:58.632 { 00:14:58.632 "admin_qpairs": 0, 00:14:58.632 "completed_nvme_io": 0, 00:14:58.632 "current_admin_qpairs": 0, 00:14:58.632 "current_io_qpairs": 0, 00:14:58.632 "io_qpairs": 0, 00:14:58.632 "name": "nvmf_tgt_poll_group_002", 00:14:58.632 "pending_bdev_io": 0, 00:14:58.632 "transports": [ 00:14:58.632 { 00:14:58.632 "trtype": "TCP" 00:14:58.632 } 00:14:58.632 ] 00:14:58.632 }, 00:14:58.632 { 00:14:58.632 "admin_qpairs": 0, 00:14:58.632 "completed_nvme_io": 0, 00:14:58.632 "current_admin_qpairs": 0, 00:14:58.632 "current_io_qpairs": 0, 00:14:58.632 "io_qpairs": 0, 00:14:58.632 "name": "nvmf_tgt_poll_group_003", 00:14:58.632 "pending_bdev_io": 0, 00:14:58.632 "transports": [ 00:14:58.632 { 00:14:58.632 "trtype": "TCP" 00:14:58.632 } 00:14:58.632 ] 00:14:58.632 } 00:14:58.632 ], 00:14:58.632 "tick_rate": 2200000000 00:14:58.632 }' 00:14:58.632 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:14:58.632 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:58.632 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:58.632 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:58.632 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:14:58.632 17:15:44 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:14:58.632 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:14:58.632 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:58.632 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:14:58.891 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:14:58.891 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:14:58.891 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:14:58.891 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:14:58.891 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:58.891 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.891 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:58.891 Malloc1 00:14:58.891 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.891 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:58.891 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.891 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:58.891 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.891 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:58.891 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.891 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:58.891 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.891 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:14:58.891 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.891 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:58.891 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.891 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:58.891 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.891 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:58.891 [2024-07-24 17:15:44.948032] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:58.891 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.891 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid=6e739a91-f4c2-49d2-9c71-a65eb7d5315b -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -a 10.0.0.2 -s 4420 00:14:58.891 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:14:58.891 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid=6e739a91-f4c2-49d2-9c71-a65eb7d5315b -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -a 10.0.0.2 -s 4420 00:14:58.891 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:14:58.891 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:58.891 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:14:58.891 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:58.891 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:14:58.891 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:58.891 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:14:58.891 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:14:58.891 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid=6e739a91-f4c2-49d2-9c71-a65eb7d5315b -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -a 10.0.0.2 -s 4420 00:14:58.891 [2024-07-24 17:15:44.976311] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b' 00:14:58.891 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:58.891 could not add new controller: failed to write to nvme-fabrics device 00:14:58.891 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:14:58.891 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:58.891 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:58.891 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:58.891 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:14:58.891 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.891 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:58.891 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.891 17:15:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 
--hostid=6e739a91-f4c2-49d2-9c71-a65eb7d5315b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:59.149 17:15:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:14:59.149 17:15:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:14:59.149 17:15:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:59.149 17:15:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:59.149 17:15:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:01.050 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:01.050 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:01.050 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:01.050 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:01.051 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:01.051 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:01.051 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:01.051 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:01.051 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:01.051 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:01.051 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:01.051 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:01.051 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:01.051 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:01.051 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:01.051 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:15:01.051 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.051 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:01.051 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.051 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid=6e739a91-f4c2-49d2-9c71-a65eb7d5315b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:01.051 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:15:01.051 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid=6e739a91-f4c2-49d2-9c71-a65eb7d5315b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:01.051 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:15:01.051 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:01.051 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:15:01.051 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:01.051 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:15:01.051 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:01.051 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:15:01.051 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:15:01.051 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid=6e739a91-f4c2-49d2-9c71-a65eb7d5315b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:01.051 [2024-07-24 17:15:47.278025] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b' 00:15:01.051 Failed to write to /dev/nvme-fabrics: Input/output error 00:15:01.051 could not add new controller: failed to write to nvme-fabrics device 00:15:01.051 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:15:01.051 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:01.051 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:01.051 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:01.051 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:15:01.051 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.051 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:01.309 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.309 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid=6e739a91-f4c2-49d2-9c71-a65eb7d5315b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:01.309 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:15:01.309 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:01.309 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:01.309 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:01.309 17:15:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 
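The access-control exchange logged above follows a fixed pattern: while the host NQN is absent from the subsystem's allowed list, nvmf_qpair_access_allowed rejects the connect and the initiator sees "Failed to write to /dev/nvme-fabrics: Input/output error"; once the host is added with nvmf_subsystem_add_host (or any-host access is enabled with -e), the same connect succeeds. A minimal sketch of that flow is shown below, with the NQN, address, and port values copied from this run; scripts/rpc.py is an assumed stand-in for the harness's rpc_cmd wrapper, so treat the sketch as illustrative rather than a transcript of the test.

# Illustrative sketch (not part of the captured log); values taken from the run above.
RPC=scripts/rpc.py                                  # assumed location of SPDK's RPC client
SUBNQN=nqn.2016-06.io.spdk:cnode1
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b
HOSTID=6e739a91-f4c2-49d2-9c71-a65eb7d5315b

# 1) No host entry yet: the target logs "does not allow host" and the connect write fails.
nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" -t tcp -n "$SUBNQN" -a 10.0.0.2 -s 4420 \
    || echo "rejected as expected"

# 2) Whitelist the host (or open the subsystem to any host) and retry; this attempt succeeds.
"$RPC" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN"
#   alternatively: "$RPC" nvmf_subsystem_allow_any_host -e "$SUBNQN"
nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" -t tcp -n "$SUBNQN" -a 10.0.0.2 -s 4420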
00:15:03.839 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:03.839 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:03.839 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:03.839 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:03.839 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:03.839 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:03.839 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:03.839 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:03.839 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:03.839 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:03.839 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:03.839 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:03.839 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:03.839 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:03.839 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:03.839 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:03.839 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.839 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:03.839 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.839 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:15:03.839 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:03.839 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:03.839 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.839 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:03.839 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.839 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:03.839 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.839 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:03.839 [2024-07-24 17:15:49.579304] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:03.839 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.839 
17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:03.839 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.839 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:03.839 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.839 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:03.839 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.839 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:03.839 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.839 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid=6e739a91-f4c2-49d2-9c71-a65eb7d5315b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:03.839 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:03.839 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:03.839 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:03.839 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:03.839 17:15:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:05.749 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:05.749 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:05.749 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:05.749 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:05.749 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:05.749 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:05.749 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:05.749 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:05.749 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:05.749 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:05.749 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:05.749 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:05.749 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:05.749 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:05.749 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 
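Each of the five loop iterations that follow repeats the same provision/attach/teardown cycle from target/rpc.sh: create the subsystem with a known serial, add a TCP listener on 10.0.0.2:4420, attach the Malloc1 namespace as nsid 5, allow any host, connect, poll until a block device with that serial appears, then disconnect, remove the namespace, and delete the subsystem. A condensed sketch of one iteration is given below, again with scripts/rpc.py assumed as a stand-in for the harness's rpc_cmd; the polling loop mirrors the lsblk/grep -c check visible in the log.

# Illustrative sketch of one loop iteration (not part of the captured log).
RPC=scripts/rpc.py                      # assumed location of SPDK's RPC client
SUBNQN=nqn.2016-06.io.spdk:cnode1
SERIAL=SPDKISFASTANDAWESOME
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b
HOSTID=6e739a91-f4c2-49d2-9c71-a65eb7d5315b

"$RPC" nvmf_create_subsystem "$SUBNQN" -s "$SERIAL"
"$RPC" nvmf_subsystem_add_listener "$SUBNQN" -t tcp -a 10.0.0.2 -s 4420
"$RPC" nvmf_subsystem_add_ns "$SUBNQN" Malloc1 -n 5
"$RPC" nvmf_subsystem_allow_any_host "$SUBNQN"

nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" -t tcp -n "$SUBNQN" -a 10.0.0.2 -s 4420

# waitforserial: poll (up to ~16 tries, 2 s apart) until lsblk reports a device with the serial.
i=0
while (( i++ <= 15 )); do
    (( $(lsblk -l -o NAME,SERIAL | grep -c "$SERIAL") >= 1 )) && break
    sleep 2
done

nvme disconnect -n "$SUBNQN"
"$RPC" nvmf_subsystem_remove_ns "$SUBNQN" 5
"$RPC" nvmf_delete_subsystem "$SUBNQN"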
00:15:05.749 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:05.749 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.749 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:05.749 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.749 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:05.749 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.749 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:05.749 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.749 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:05.750 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:05.750 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.750 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:05.750 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.750 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:05.750 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.750 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:05.750 [2024-07-24 17:15:51.882487] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:05.750 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.750 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:05.750 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.750 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:05.750 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.750 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:05.750 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.750 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:05.750 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.750 17:15:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid=6e739a91-f4c2-49d2-9c71-a65eb7d5315b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:06.009 17:15:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:06.009 17:15:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- common/autotest_common.sh@1198 -- # local i=0 00:15:06.009 17:15:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:06.009 17:15:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:06.009 17:15:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:08.051 17:15:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:08.051 17:15:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:08.051 17:15:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:08.051 17:15:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:08.051 17:15:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:08.051 17:15:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:08.051 17:15:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:08.051 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:08.052 17:15:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:08.052 17:15:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:08.052 17:15:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:08.052 17:15:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:08.052 17:15:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:08.052 17:15:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:08.052 17:15:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:08.052 17:15:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:08.052 17:15:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.052 17:15:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:08.052 17:15:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.052 17:15:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:08.052 17:15:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.052 17:15:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:08.052 17:15:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.052 17:15:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:08.052 17:15:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:08.052 17:15:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.052 17:15:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:15:08.309 17:15:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.309 17:15:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:08.309 17:15:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.309 17:15:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:08.309 [2024-07-24 17:15:54.290257] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:08.309 17:15:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.309 17:15:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:08.309 17:15:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.309 17:15:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:08.309 17:15:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.309 17:15:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:08.309 17:15:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.309 17:15:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:08.309 17:15:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.309 17:15:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid=6e739a91-f4c2-49d2-9c71-a65eb7d5315b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:08.309 17:15:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:08.309 17:15:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:08.309 17:15:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:08.309 17:15:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:08.309 17:15:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:10.840 17:15:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:10.840 17:15:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:10.840 17:15:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:10.840 17:15:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:10.840 17:15:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:10.840 17:15:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:10.840 17:15:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:10.840 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:10.840 17:15:56 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:10.840 17:15:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:10.840 17:15:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:10.840 17:15:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:10.840 17:15:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:10.840 17:15:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:10.840 17:15:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:10.840 17:15:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:10.840 17:15:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.840 17:15:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:10.840 17:15:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.840 17:15:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:10.840 17:15:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.840 17:15:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:10.840 17:15:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.840 17:15:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:10.840 17:15:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:10.840 17:15:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.840 17:15:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:10.840 17:15:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.840 17:15:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:10.840 17:15:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.840 17:15:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:10.840 [2024-07-24 17:15:56.706445] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:10.840 17:15:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.840 17:15:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:10.840 17:15:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.840 17:15:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:10.840 17:15:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.840 17:15:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:10.840 17:15:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.840 17:15:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:10.840 17:15:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.840 17:15:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid=6e739a91-f4c2-49d2-9c71-a65eb7d5315b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:10.840 17:15:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:10.840 17:15:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:10.840 17:15:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:10.840 17:15:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:10.840 17:15:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:12.741 17:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:12.741 17:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:12.741 17:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:12.741 17:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:12.741 17:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:12.741 17:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:12.741 17:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:12.741 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:12.741 17:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:12.741 17:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:12.741 17:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:12.741 17:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:12.741 17:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:12.741 17:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:12.999 17:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:12.999 17:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:12.999 17:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.999 17:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:12.999 17:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.999 17:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:12.999 17:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.999 17:15:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:12.999 17:15:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.999 17:15:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:12.999 17:15:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:12.999 17:15:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.999 17:15:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:12.999 17:15:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.999 17:15:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:12.999 17:15:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.999 17:15:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:12.999 [2024-07-24 17:15:59.013947] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:12.999 17:15:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.999 17:15:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:12.999 17:15:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.999 17:15:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:12.999 17:15:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.999 17:15:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:12.999 17:15:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.999 17:15:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:12.999 17:15:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.999 17:15:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid=6e739a91-f4c2-49d2-9c71-a65eb7d5315b -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:12.999 17:15:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:12.999 17:15:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:15:12.999 17:15:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:12.999 17:15:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:12.999 17:15:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:15:15.528 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:15.528 17:16:01 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:15.528 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:15.528 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:15.528 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:15.528 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:15:15.528 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:15.529 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.529 17:16:01 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:15.529 [2024-07-24 17:16:01.317344] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:15.529 [2024-07-24 17:16:01.369366] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:15.529 [2024-07-24 17:16:01.417387] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:15.529 [2024-07-24 17:16:01.469377] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.529 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:15.530 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.530 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:15.530 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.530 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:15.530 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.530 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:15.530 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.530 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:15.530 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.530 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:15.530 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.530 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:15.530 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.530 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:15.530 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:15.530 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.530 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:15.530 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.530 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:15.530 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.530 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:15.530 [2024-07-24 17:16:01.517525] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:15.530 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.530 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:15.530 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.530 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:15.530 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.530 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:15.530 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.530 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:15.530 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.530 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:15.530 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.530 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:15.530 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.530 17:16:01 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:15.530 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.530 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:15.530 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.530 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:15:15.530 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.530 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:15.530 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.530 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:15:15.530 "poll_groups": [ 00:15:15.530 { 00:15:15.530 "admin_qpairs": 2, 00:15:15.530 "completed_nvme_io": 65, 00:15:15.530 "current_admin_qpairs": 0, 00:15:15.530 "current_io_qpairs": 0, 00:15:15.530 "io_qpairs": 16, 00:15:15.530 "name": "nvmf_tgt_poll_group_000", 00:15:15.530 "pending_bdev_io": 0, 00:15:15.530 "transports": [ 00:15:15.530 { 00:15:15.530 "trtype": "TCP" 00:15:15.530 } 00:15:15.530 ] 00:15:15.530 }, 00:15:15.530 { 00:15:15.530 "admin_qpairs": 3, 00:15:15.530 "completed_nvme_io": 118, 00:15:15.530 "current_admin_qpairs": 0, 00:15:15.530 "current_io_qpairs": 0, 00:15:15.530 "io_qpairs": 17, 00:15:15.530 "name": "nvmf_tgt_poll_group_001", 00:15:15.530 "pending_bdev_io": 0, 00:15:15.530 "transports": [ 00:15:15.530 { 00:15:15.530 "trtype": "TCP" 00:15:15.530 } 00:15:15.530 ] 00:15:15.530 }, 00:15:15.530 { 00:15:15.530 "admin_qpairs": 1, 00:15:15.530 "completed_nvme_io": 167, 00:15:15.530 "current_admin_qpairs": 0, 00:15:15.530 "current_io_qpairs": 0, 00:15:15.530 "io_qpairs": 19, 00:15:15.530 "name": "nvmf_tgt_poll_group_002", 00:15:15.530 "pending_bdev_io": 0, 00:15:15.530 "transports": [ 00:15:15.530 { 00:15:15.530 "trtype": "TCP" 00:15:15.530 } 00:15:15.530 ] 00:15:15.530 }, 00:15:15.530 { 00:15:15.530 "admin_qpairs": 1, 00:15:15.530 "completed_nvme_io": 70, 00:15:15.530 "current_admin_qpairs": 0, 00:15:15.530 "current_io_qpairs": 0, 00:15:15.530 "io_qpairs": 18, 00:15:15.530 "name": "nvmf_tgt_poll_group_003", 00:15:15.530 "pending_bdev_io": 0, 00:15:15.530 "transports": [ 00:15:15.530 { 00:15:15.530 "trtype": "TCP" 00:15:15.530 } 00:15:15.530 ] 00:15:15.530 } 00:15:15.530 ], 00:15:15.530 "tick_rate": 2200000000 00:15:15.530 }' 00:15:15.530 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:15:15.530 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:15:15.530 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:15:15.530 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:15.530 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:15:15.530 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:15:15.530 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:15:15.530 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq 
'.poll_groups[].io_qpairs' 00:15:15.530 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:15.530 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 70 > 0 )) 00:15:15.530 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:15:15.530 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:15:15.530 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:15:15.530 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:15.530 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:15:15.530 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:15.530 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:15:15.530 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:15.530 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:15.530 rmmod nvme_tcp 00:15:15.530 rmmod nvme_fabrics 00:15:15.530 rmmod nvme_keyring 00:15:15.530 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:15.530 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:15:15.530 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:15:15.530 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 74823 ']' 00:15:15.530 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 74823 00:15:15.530 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 74823 ']' 00:15:15.530 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 74823 00:15:15.530 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:15:15.530 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:15.530 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74823 00:15:15.789 killing process with pid 74823 00:15:15.789 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:15.789 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:15.789 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74823' 00:15:15.789 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 74823 00:15:15.789 17:16:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@974 -- # wait 74823 00:15:16.049 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:16.049 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:16.049 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:16.049 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:16.049 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:16.049 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:16.049 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:16.049 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:16.049 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:16.049 ************************************ 00:15:16.049 END TEST nvmf_rpc 00:15:16.049 ************************************ 00:15:16.049 00:15:16.049 real 0m19.077s 00:15:16.049 user 1m11.549s 00:15:16.049 sys 0m2.662s 00:15:16.049 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:16.049 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:16.049 17:16:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:15:16.049 17:16:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:16.049 17:16:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:16.049 17:16:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:16.049 ************************************ 00:15:16.049 START TEST nvmf_invalid 00:15:16.049 ************************************ 00:15:16.049 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:15:16.049 * Looking for test storage... 00:15:16.049 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:16.049 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:16.049 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:15:16.049 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:16.049 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:16.049 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:16.049 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:16.049 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:16.049 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:16.049 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:16.050 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:16.050 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:16.050 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:16.050 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:15:16.050 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:15:16.050 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:16.050 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:16.050 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:16.050 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:16.050 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:16.050 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:16.050 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:16.050 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:16.050 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.050 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.050 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.050 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:15:16.050 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.050 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:15:16.050 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:16.050 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:16.050 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:16.050 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:16.050 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:16.050 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:16.050 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:16.050 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:16.050 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:15:16.050 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:16.050 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:15:16.050 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:15:16.050 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:15:16.050 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:15:16.050 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:16.050 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:16.050 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:16.050 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:16.050 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:16.050 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:16.050 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:16.050 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:16.050 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:16.050 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:16.050 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@423 -- # [[ virt 
== phy ]] 00:15:16.050 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:16.050 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:16.050 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:16.050 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:16.050 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:16.050 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:16.050 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:16.050 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:16.050 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:16.050 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:16.050 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:16.050 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:16.050 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:16.050 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:16.050 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:16.050 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:16.050 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:16.050 Cannot find device "nvmf_tgt_br" 00:15:16.050 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@155 -- # true 00:15:16.050 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:16.050 Cannot find device "nvmf_tgt_br2" 00:15:16.050 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@156 -- # true 00:15:16.050 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:16.050 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:16.339 Cannot find device "nvmf_tgt_br" 00:15:16.339 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@158 -- # true 00:15:16.340 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:16.340 Cannot find device "nvmf_tgt_br2" 00:15:16.340 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@159 -- # true 00:15:16.340 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:16.340 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:16.340 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:16.340 Cannot open network namespace "nvmf_tgt_ns_spdk": 
No such file or directory 00:15:16.340 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@162 -- # true 00:15:16.340 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:16.340 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:16.340 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@163 -- # true 00:15:16.340 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:16.340 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:16.340 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:16.340 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:16.340 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:16.340 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:16.340 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:16.340 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:16.340 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:16.340 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:16.340 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:16.340 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:16.340 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:16.340 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:16.340 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:16.340 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:16.340 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:16.340 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:16.340 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:16.340 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:16.340 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:16.598 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:16.598 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 
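The nvmf_veth_init sequence traced above builds the veth/namespace test network that the pings which follow then verify. A condensed, standalone sketch of the same topology is shown below; the interface, namespace and address names are taken from the trace, and the snippet is an illustrative reconstruction rather than the SPDK helper itself.

# illustrative reconstruction of the nvmf_veth_init steps traced above (not the SPDK source verbatim)
ip netns add nvmf_tgt_ns_spdk                                # target-side network namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator veth pair
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # first target veth pair
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target veth pair
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk              # move the target ends into the namespace
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                     # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge                              # bridge joining the host-side peer ends
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP to the initiator side
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                 # allow traffic to cross the bridge
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                            # reachability check, as in the log that follows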
00:15:16.598 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:16.598 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:16.598 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:15:16.598 00:15:16.598 --- 10.0.0.2 ping statistics --- 00:15:16.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:16.598 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:15:16.598 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:16.598 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:16.599 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:15:16.599 00:15:16.599 --- 10.0.0.3 ping statistics --- 00:15:16.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:16.599 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:15:16.599 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:16.599 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:16.599 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:15:16.599 00:15:16.599 --- 10.0.0.1 ping statistics --- 00:15:16.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:16.599 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:15:16.599 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:16.599 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@433 -- # return 0 00:15:16.599 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:16.599 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:16.599 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:16.599 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:16.599 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:16.599 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:16.599 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:16.599 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:15:16.599 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:16.599 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:16.599 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:16.599 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=75334 00:15:16.599 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:16.599 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 75334 00:15:16.599 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 75334 ']' 00:15:16.599 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:16.599 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # 
local max_retries=100 00:15:16.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:16.599 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:16.599 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:16.599 17:16:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:16.599 [2024-07-24 17:16:02.662970] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:15:16.599 [2024-07-24 17:16:02.663081] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:16.599 [2024-07-24 17:16:02.800570] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:16.856 [2024-07-24 17:16:02.931731] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:16.857 [2024-07-24 17:16:02.931808] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:16.857 [2024-07-24 17:16:02.931823] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:16.857 [2024-07-24 17:16:02.931834] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:16.857 [2024-07-24 17:16:02.931843] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:16.857 [2024-07-24 17:16:02.931975] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:16.857 [2024-07-24 17:16:02.932872] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:16.857 [2024-07-24 17:16:02.932944] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:16.857 [2024-07-24 17:16:02.932950] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:17.421 17:16:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:17.421 17:16:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0 00:15:17.421 17:16:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:17.421 17:16:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:17.421 17:16:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:17.679 17:16:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:17.679 17:16:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:15:17.679 17:16:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode17035 00:15:17.679 [2024-07-24 17:16:03.889114] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:15:17.679 17:16:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='2024/07/24 17:16:03 error on JSON-RPC call, method: nvmf_create_subsystem, params: 
map[nqn:nqn.2016-06.io.spdk:cnode17035 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:15:17.679 request: 00:15:17.679 { 00:15:17.679 "method": "nvmf_create_subsystem", 00:15:17.679 "params": { 00:15:17.679 "nqn": "nqn.2016-06.io.spdk:cnode17035", 00:15:17.679 "tgt_name": "foobar" 00:15:17.679 } 00:15:17.679 } 00:15:17.679 Got JSON-RPC error response 00:15:17.679 GoRPCClient: error on JSON-RPC call' 00:15:17.679 17:16:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ 2024/07/24 17:16:03 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode17035 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:15:17.679 request: 00:15:17.679 { 00:15:17.679 "method": "nvmf_create_subsystem", 00:15:17.679 "params": { 00:15:17.679 "nqn": "nqn.2016-06.io.spdk:cnode17035", 00:15:17.679 "tgt_name": "foobar" 00:15:17.679 } 00:15:17.679 } 00:15:17.679 Got JSON-RPC error response 00:15:17.679 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:15:17.937 17:16:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:15:17.937 17:16:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode17855 00:15:18.193 [2024-07-24 17:16:04.173403] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17855: invalid serial number 'SPDKISFASTANDAWESOME' 00:15:18.193 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='2024/07/24 17:16:04 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode17855 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:15:18.193 request: 00:15:18.193 { 00:15:18.193 "method": "nvmf_create_subsystem", 00:15:18.193 "params": { 00:15:18.193 "nqn": "nqn.2016-06.io.spdk:cnode17855", 00:15:18.193 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:15:18.193 } 00:15:18.193 } 00:15:18.193 Got JSON-RPC error response 00:15:18.193 GoRPCClient: error on JSON-RPC call' 00:15:18.193 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ 2024/07/24 17:16:04 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode17855 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:15:18.193 request: 00:15:18.193 { 00:15:18.193 "method": "nvmf_create_subsystem", 00:15:18.194 "params": { 00:15:18.194 "nqn": "nqn.2016-06.io.spdk:cnode17855", 00:15:18.194 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:15:18.194 } 00:15:18.194 } 00:15:18.194 Got JSON-RPC error response 00:15:18.194 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:15:18.194 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:15:18.194 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode23426 00:15:18.194 [2024-07-24 17:16:04.421606] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem 
nqn.2016-06.io.spdk:cnode23426: invalid model number 'SPDK_Controller' 00:15:18.452 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='2024/07/24 17:16:04 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode23426], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:15:18.452 request: 00:15:18.452 { 00:15:18.452 "method": "nvmf_create_subsystem", 00:15:18.452 "params": { 00:15:18.452 "nqn": "nqn.2016-06.io.spdk:cnode23426", 00:15:18.452 "model_number": "SPDK_Controller\u001f" 00:15:18.452 } 00:15:18.452 } 00:15:18.452 Got JSON-RPC error response 00:15:18.452 GoRPCClient: error on JSON-RPC call' 00:15:18.452 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ 2024/07/24 17:16:04 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode23426], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:15:18.452 request: 00:15:18.452 { 00:15:18.452 "method": "nvmf_create_subsystem", 00:15:18.452 "params": { 00:15:18.452 "nqn": "nqn.2016-06.io.spdk:cnode23426", 00:15:18.452 "model_number": "SPDK_Controller\u001f" 00:15:18.452 } 00:15:18.452 } 00:15:18.452 Got JSON-RPC error response 00:15:18.452 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:15:18.452 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:15:18.452 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:15:18.452 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:15:18.452 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:15:18.452 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:15:18.452 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:15:18.452 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:18.452 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:15:18.452 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:15:18.452 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:15:18.452 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:18.452 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:18.452 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:15:18.452 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:15:18.452 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 
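The three nvmf_create_subsystem failures traced above (unknown target name, serial number ending in a 0x1f control character, model number ending in a 0x1f control character) all follow the same negative-test pattern. A minimal sketch of driving the same checks by hand against a running target is shown below; the rpc.py path and the -t/-s/-d flags are the ones visible in the trace, while the grep-based assertions are illustrative assumptions.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# unknown target name -> the RPC is expected to fail with "Unable to find target foobar"
"$rpc" nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode17035 2>&1 \
    | grep -q 'Unable to find target' && echo 'tgt_name rejection OK'

# serial number with a trailing 0x1f byte -> expected failure "Invalid SN"
"$rpc" nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode17855 2>&1 \
    | grep -q 'Invalid SN' && echo 'serial_number rejection OK'

# model number with a trailing 0x1f byte -> expected failure "Invalid MN"
"$rpc" nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode23426 2>&1 \
    | grep -q 'Invalid MN' && echo 'model_number rejection OK'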
00:15:18.452 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:18.452 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:18.452 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:15:18.452 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:15:18.452 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:15:18.452 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:18.452 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:18.452 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:15:18.452 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:15:18.452 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:15:18.452 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:18.452 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:18.452 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:15:18.452 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:15:18.452 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:15:18.452 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:18.452 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:18.452 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:15:18.452 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:15:18.452 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:15:18.452 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:18.452 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:18.452 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:15:18.452 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:15:18.452 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:15:18.452 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:18.452 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:18.452 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:15:18.452 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:15:18.452 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:15:18.452 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:18.452 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:18.452 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:15:18.452 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x69' 00:15:18.452 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:15:18.452 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:18.452 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:18.452 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:15:18.452 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:15:18.452 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:15:18.452 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:18.452 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:18.452 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:15:18.452 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:15:18.452 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:15:18.452 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:18.452 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:18.452 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:15:18.452 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:15:18.452 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:15:18.452 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:18.452 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:18.452 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:15:18.452 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:15:18.452 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:15:18.452 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:18.452 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:18.452 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:15:18.452 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:15:18.452 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:15:18.452 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:18.452 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:18.452 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:15:18.452 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:15:18.452 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:15:18.452 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:18.452 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:18.452 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
printf %x 104 00:15:18.452 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:15:18.452 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:15:18.452 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:18.452 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:18.452 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:15:18.453 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:15:18.453 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:15:18.453 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:18.453 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:18.453 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:15:18.453 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:15:18.453 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:15:18.453 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:18.453 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:18.453 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:15:18.453 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:15:18.453 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:15:18.453 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:18.453 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:18.453 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:15:18.453 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:15:18.453 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:15:18.453 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:18.453 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:18.453 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:15:18.453 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:15:18.453 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:15:18.453 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:18.453 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:18.453 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ Z == \- ]] 00:15:18.453 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'Z7bl[N0Fi>Sk%#h%!Gzy' 00:15:18.453 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s 'Z7bl[N0Fi>Sk%#h%!Gzy' nqn.2016-06.io.spdk:cnode10561 00:15:18.711 [2024-07-24 17:16:04.822016] nvmf_rpc.c: 
413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10561: invalid serial number 'Z7bl[N0Fi>Sk%#h%!Gzy' 00:15:18.711 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='2024/07/24 17:16:04 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode10561 serial_number:Z7bl[N0Fi>Sk%#h%!Gzy], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN Z7bl[N0Fi>Sk%#h%!Gzy 00:15:18.711 request: 00:15:18.711 { 00:15:18.711 "method": "nvmf_create_subsystem", 00:15:18.711 "params": { 00:15:18.711 "nqn": "nqn.2016-06.io.spdk:cnode10561", 00:15:18.711 "serial_number": "Z7bl[N0Fi>Sk%#\u007fh%!Gzy" 00:15:18.711 } 00:15:18.711 } 00:15:18.711 Got JSON-RPC error response 00:15:18.711 GoRPCClient: error on JSON-RPC call' 00:15:18.711 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ 2024/07/24 17:16:04 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode10561 serial_number:Z7bl[N0Fi>Sk%#h%!Gzy], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN Z7bl[N0Fi>Sk%#h%!Gzy 00:15:18.711 request: 00:15:18.711 { 00:15:18.711 "method": "nvmf_create_subsystem", 00:15:18.711 "params": { 00:15:18.711 "nqn": "nqn.2016-06.io.spdk:cnode10561", 00:15:18.711 "serial_number": "Z7bl[N0Fi>Sk%#\u007fh%!Gzy" 00:15:18.711 } 00:15:18.711 } 00:15:18.711 Got JSON-RPC error response 00:15:18.711 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:15:18.711 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:15:18.711 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:15:18.711 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:15:18.711 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:15:18.711 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:15:18.711 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:15:18.712 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:18.712 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:15:18.712 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:15:18.712 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:15:18.712 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:18.712 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:18.712 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:15:18.712 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:15:18.712 
17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:15:18.712 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:18.712 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:18.712 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:15:18.712 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:15:18.712 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:15:18.712 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:18.712 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:18.712 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:15:18.712 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:15:18.712 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:15:18.712 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:18.712 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:18.712 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:15:18.712 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:15:18.712 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:15:18.712 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:18.712 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:18.712 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:15:18.712 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:15:18.712 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:15:18.712 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:18.712 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:18.712 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:15:18.712 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:15:18.712 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:15:18.712 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:18.712 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:18.712 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:15:18.712 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:15:18.712 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:15:18.712 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:18.712 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:18.712 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 
00:15:18.712 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:15:18.712 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:15:18.712 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:18.712 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:18.712 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:15:18.712 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:15:18.712 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:15:18.712 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:18.712 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:18.712 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:15:18.712 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:15:18.712 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:15:18.712 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:18.712 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:18.712 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:15:18.712 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:15:18.712 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:15:18.712 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:18.712 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:18.712 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:15:18.712 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:15:18.712 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:15:18.712 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:18.712 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:18.712 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:15:18.712 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:15:18.712 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:15:18.712 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:18.712 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:18.712 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:15:18.712 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:15:18.712 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:15:18.712 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:18.712 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
00:15:18.712 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:15:18.712 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:15:18.712 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:15:18.712 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:18.712 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:18.712 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:15:18.712 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:15:18.712 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:15:18.712 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:18.712 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:18.712 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:15:18.712 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:15:18.712 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:15:18.712 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:18.712 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:18.712 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:15:18.712 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:15:18.712 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:15:18.712 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:18.712 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:18.712 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:15:18.712 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:15:18.972 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:15:18.972 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:18.972 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:18.972 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:15:18.972 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:15:18.972 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:15:18.972 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:18.972 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:18.972 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:15:18.972 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:15:18.972 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:15:18.972 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:15:18.972 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:18.972 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:15:18.972 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:15:18.972 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:15:18.972 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:18.972 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:18.972 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:15:18.972 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:15:18.972 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:15:18.972 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:18.972 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:18.972 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:15:18.972 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:15:18.972 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:15:18.972 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:18.972 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:18.972 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:15:18.972 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:15:18.972 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:15:18.972 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:18.972 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:18.972 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:15:18.972 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:15:18.972 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:15:18.972 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:18.972 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:18.972 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:15:18.972 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:15:18.972 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:15:18.972 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:18.972 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:18.972 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:15:18.972 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:15:18.972 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- 
# string+=e 00:15:18.972 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:18.972 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:18.972 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:15:18.972 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:15:18.972 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:15:18.972 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:18.972 17:16:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:18.972 17:16:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:15:18.972 17:16:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:15:18.972 17:16:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:15:18.972 17:16:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:18.972 17:16:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:18.972 17:16:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:15:18.972 17:16:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:15:18.972 17:16:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:15:18.972 17:16:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:18.972 17:16:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:18.972 17:16:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:15:18.972 17:16:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:15:18.972 17:16:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:15:18.972 17:16:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:18.972 17:16:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:18.972 17:16:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:15:18.972 17:16:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:15:18.972 17:16:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:15:18.972 17:16:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:18.972 17:16:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:18.972 17:16:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:15:18.972 17:16:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:15:18.972 17:16:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
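The long character-by-character trace around this point comes from the test's gen_random_s helper, which assembles a string of the requested length from the list of decimal character codes 32..127 declared at the start of each run. A plausible standalone reconstruction is sketched below; the $RANDOM-based index selection is an assumption (the trace only records the codes that were chosen), while the printf %x / echo -e / string+= steps mirror the trace directly.

# illustrative sketch of the gen_random_s behaviour seen in the trace (not the SPDK source verbatim)
gen_random_s() {
    local length=$1 ll hex string=
    local chars=($(seq 32 127))                      # decimal codes 32..127, as in the chars=(...) array above
    for (( ll = 0; ll < length; ll++ )); do
        local code=${chars[RANDOM % ${#chars[@]}]}   # index choice via $RANDOM is an assumption
        hex=$(printf '%x' "$code")                   # e.g. 90 -> 5a, matching the "printf %x" steps
        string+=$(echo -e "\x$hex")                  # e.g. \x5a -> Z, matching the "echo -e" / string+= steps
    done
    echo "$string"
}

gen_random_s 21    # a 21-character string, like the serial number passed to cnode10561 above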
00:15:18.972 17:16:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:18.972 17:16:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:18.972 17:16:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:15:18.972 17:16:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:15:18.972 17:16:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:15:18.972 17:16:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:18.972 17:16:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:18.972 17:16:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:15:18.972 17:16:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:15:18.972 17:16:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:15:18.972 17:16:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:18.972 17:16:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:18.972 17:16:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:15:18.972 17:16:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:15:18.972 17:16:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:15:18.972 17:16:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:18.973 17:16:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:18.973 17:16:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:15:18.973 17:16:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:15:18.973 17:16:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:15:18.973 17:16:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:18.973 17:16:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:18.973 17:16:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:15:18.973 17:16:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:15:18.973 17:16:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:15:18.973 17:16:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:18.973 17:16:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:18.973 17:16:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:15:18.973 17:16:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:15:18.973 17:16:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:15:18.973 17:16:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:18.973 17:16:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:18.973 17:16:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ I == \- ]] 00:15:18.973 17:16:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 
'IO=wEhnYa9pwVNjt(F /dev/null' 00:15:21.807 17:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:21.807 17:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:21.807 00:15:21.807 real 0m5.797s 00:15:21.807 user 0m22.920s 00:15:21.807 sys 0m1.289s 00:15:21.807 17:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:21.807 17:16:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:21.807 ************************************ 00:15:21.808 END TEST nvmf_invalid 00:15:21.808 ************************************ 00:15:21.808 17:16:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:15:21.808 17:16:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:21.808 17:16:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:21.808 17:16:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:21.808 ************************************ 00:15:21.808 START TEST nvmf_connect_stress 00:15:21.808 ************************************ 00:15:21.808 17:16:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:15:22.067 * Looking for test storage... 00:15:22.067 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:22.067 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:22.067 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:15:22.067 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:22.067 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:22.067 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:22.067 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:22.067 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:22.067 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:22.067 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:22.067 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:22.067 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:22.067 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:22.067 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:15:22.067 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:15:22.067 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:22.067 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:22.067 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:22.067 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:22.067 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:22.067 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:22.067 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:22.067 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:22.067 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:22.067 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:22.067 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:22.067 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:15:22.067 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:22.067 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:15:22.067 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:22.067 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:22.067 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:22.067 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:22.067 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:22.067 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:22.067 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:22.067 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:22.067 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:15:22.067 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:22.067 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:22.067 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:22.067 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:22.067 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:22.067 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:22.067 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:22.067 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:22.067 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:22.067 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:22.067 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:22.067 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:22.067 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:22.067 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:22.067 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 
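With NET_TYPE=virt, nvmftestinit calls nvmf_veth_init, whose individual ip/iptables steps are traced piecemeal in the entries that follow. Condensed into one block for readability, taken from those traced commands; collapsing the link-up steps into a loop is the only liberty taken.

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator-side veth pair
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # first target interface
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2      # second target interface
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # NVMF_INITIATOR_IP
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # NVMF_FIRST_TARGET_IP
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # NVMF_SECOND_TARGET_IP
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings traced afterwards (10.0.0.2 and 10.0.0.3 from the host, 10.0.0.1 from inside the namespace) simply confirm this topology before the target is started.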
00:15:22.067 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:22.067 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:22.067 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:22.067 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:22.067 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:22.067 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:22.067 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:22.067 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:22.067 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:22.067 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:22.067 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:22.067 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:22.067 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:22.067 Cannot find device "nvmf_tgt_br" 00:15:22.067 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@155 -- # true 00:15:22.067 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:22.067 Cannot find device "nvmf_tgt_br2" 00:15:22.067 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@156 -- # true 00:15:22.067 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:22.067 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:22.067 Cannot find device "nvmf_tgt_br" 00:15:22.067 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@158 -- # true 00:15:22.067 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:22.067 Cannot find device "nvmf_tgt_br2" 00:15:22.067 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@159 -- # true 00:15:22.067 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:22.068 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:22.068 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:22.068 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:22.068 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@162 -- # true 00:15:22.068 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 
00:15:22.068 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:22.068 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@163 -- # true 00:15:22.068 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:22.068 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:22.068 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:22.068 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:22.068 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:22.068 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:22.068 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:22.068 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:22.068 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:22.068 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:22.068 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:22.068 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:22.068 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:22.068 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:22.068 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:22.068 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:22.326 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:22.326 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:22.326 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:22.326 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:22.326 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:22.326 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:22.326 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:22.326 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@205 -- # ping -c 1 
10.0.0.2 00:15:22.326 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:22.326 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:15:22.326 00:15:22.326 --- 10.0.0.2 ping statistics --- 00:15:22.326 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:22.326 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:15:22.326 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:22.326 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:22.326 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.036 ms 00:15:22.326 00:15:22.326 --- 10.0.0.3 ping statistics --- 00:15:22.326 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:22.326 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:15:22.326 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:22.326 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:22.326 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:15:22.326 00:15:22.326 --- 10.0.0.1 ping statistics --- 00:15:22.326 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:22.326 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:15:22.326 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:22.326 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@433 -- # return 0 00:15:22.326 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:22.326 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:22.326 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:22.326 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:22.326 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:22.326 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:22.326 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:22.326 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:15:22.326 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:22.326 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:22.326 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:22.326 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=75843 00:15:22.326 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 75843 00:15:22.326 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:22.326 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 75843 ']' 00:15:22.326 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:22.326 17:16:08 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:22.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:22.327 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:22.327 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:22.327 17:16:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:22.327 [2024-07-24 17:16:08.446256] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:15:22.327 [2024-07-24 17:16:08.446380] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:22.585 [2024-07-24 17:16:08.585764] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:22.585 [2024-07-24 17:16:08.700340] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:22.585 [2024-07-24 17:16:08.700393] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:22.585 [2024-07-24 17:16:08.700405] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:22.585 [2024-07-24 17:16:08.700414] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:22.585 [2024-07-24 17:16:08.700421] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
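Once the reactors are up, connect_stress.sh issues four RPCs and then launches the stress tool; they are traced piecemeal below, so here is the sequence in one place. rpc_cmd is the harness's RPC wrapper; the flag annotations in the comments are a reading of the options, not log output, and the backgrounding plus $! capture are inferred from the PERF_PID assignment and the kill -0 polling that follows rather than shown verbatim.

rpc_cmd nvmf_create_transport -t tcp -o -u 8192                    # TCP transport, 8192-byte IO unit
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc_cmd bdev_null_create NULL1 1000 512                            # 1000 MB null bdev, 512-byte blocks
/home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 &
PERF_PID=$!                                                        # 75896 in this run

The long run of kill -0 75896 checks that follows is the watchdog loop: the test repeatedly confirms the stress tool is still alive while it keeps connecting to the 10.0.0.2:4420 listener for the 10-second -t window.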
00:15:22.585 [2024-07-24 17:16:08.701281] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:22.585 [2024-07-24 17:16:08.701363] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:22.585 [2024-07-24 17:16:08.701366] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:23.519 17:16:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:23.519 17:16:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:15:23.519 17:16:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:23.519 17:16:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:23.519 17:16:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:23.519 17:16:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:23.519 17:16:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:23.519 17:16:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.519 17:16:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:23.519 [2024-07-24 17:16:09.489992] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:23.519 17:16:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.519 17:16:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:23.519 17:16:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.519 17:16:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:23.519 17:16:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.519 17:16:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:23.519 17:16:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.519 17:16:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:23.519 [2024-07-24 17:16:09.510076] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:23.519 17:16:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.519 17:16:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:23.519 17:16:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.519 17:16:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:23.519 NULL1 00:15:23.519 17:16:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.519 17:16:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@21 -- # PERF_PID=75896 00:15:23.519 17:16:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:15:23.519 17:16:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:15:23.519 17:16:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:15:23.519 17:16:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:15:23.519 17:16:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:23.519 17:16:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:23.519 17:16:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:23.519 17:16:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:23.519 17:16:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:23.519 17:16:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:23.519 17:16:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:23.519 17:16:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:23.519 17:16:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:23.519 17:16:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:23.519 17:16:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:23.519 17:16:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:23.519 17:16:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:23.519 17:16:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:23.519 17:16:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:23.519 17:16:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:23.519 17:16:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:23.519 17:16:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:23.519 17:16:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:23.519 17:16:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:23.519 17:16:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:23.520 17:16:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:23.520 17:16:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:23.520 17:16:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:15:23.520 17:16:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:23.520 17:16:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:23.520 17:16:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:23.520 17:16:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:23.520 17:16:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:23.520 17:16:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:23.520 17:16:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:23.520 17:16:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:23.520 17:16:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:23.520 17:16:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:23.520 17:16:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:23.520 17:16:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:23.520 17:16:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:23.520 17:16:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:23.520 17:16:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:23.520 17:16:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:23.520 17:16:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75896 00:15:23.520 17:16:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:23.520 17:16:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.520 17:16:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:23.777 17:16:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.777 17:16:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75896 00:15:23.777 17:16:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:23.777 17:16:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.777 17:16:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:24.035 17:16:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.035 17:16:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75896 00:15:24.035 17:16:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:24.035 17:16:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.035 17:16:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set 
+x 00:15:24.626 17:16:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.626 17:16:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75896 00:15:24.626 17:16:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:24.627 17:16:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.627 17:16:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:24.884 17:16:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.884 17:16:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75896 00:15:24.884 17:16:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:24.884 17:16:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.884 17:16:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:25.142 17:16:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.142 17:16:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75896 00:15:25.142 17:16:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:25.142 17:16:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.142 17:16:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:25.401 17:16:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.401 17:16:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75896 00:15:25.401 17:16:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:25.401 17:16:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.401 17:16:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:25.660 17:16:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.660 17:16:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75896 00:15:25.660 17:16:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:25.660 17:16:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.660 17:16:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:26.226 17:16:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.226 17:16:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75896 00:15:26.226 17:16:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:26.226 17:16:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.226 17:16:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:26.483 17:16:12 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.483 17:16:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75896 00:15:26.483 17:16:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:26.483 17:16:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.483 17:16:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:26.741 17:16:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.741 17:16:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75896 00:15:26.741 17:16:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:26.741 17:16:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.741 17:16:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:26.999 17:16:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.999 17:16:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75896 00:15:26.999 17:16:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:26.999 17:16:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.999 17:16:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:27.264 17:16:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.264 17:16:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75896 00:15:27.264 17:16:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:27.264 17:16:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.264 17:16:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:27.833 17:16:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.833 17:16:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75896 00:15:27.833 17:16:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:27.833 17:16:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.833 17:16:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:28.091 17:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.091 17:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75896 00:15:28.091 17:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:28.091 17:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.091 17:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:28.350 17:16:14 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.350 17:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75896 00:15:28.350 17:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:28.350 17:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.350 17:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:28.610 17:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.610 17:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75896 00:15:28.610 17:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:28.610 17:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.610 17:16:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:28.868 17:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.868 17:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75896 00:15:28.868 17:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:28.868 17:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.868 17:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:29.433 17:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.433 17:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75896 00:15:29.433 17:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:29.433 17:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.433 17:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:29.691 17:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.691 17:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75896 00:15:29.691 17:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:29.691 17:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.691 17:16:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:29.950 17:16:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.950 17:16:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75896 00:15:29.950 17:16:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:29.950 17:16:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.950 17:16:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:30.208 17:16:16 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.208 17:16:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75896 00:15:30.208 17:16:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:30.208 17:16:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.208 17:16:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:30.466 17:16:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.466 17:16:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75896 00:15:30.466 17:16:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:30.466 17:16:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.466 17:16:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:31.031 17:16:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.031 17:16:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75896 00:15:31.031 17:16:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:31.031 17:16:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.031 17:16:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:31.289 17:16:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.289 17:16:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75896 00:15:31.289 17:16:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:31.289 17:16:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.289 17:16:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:31.548 17:16:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.548 17:16:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75896 00:15:31.548 17:16:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:31.548 17:16:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.548 17:16:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:31.806 17:16:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.806 17:16:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75896 00:15:31.806 17:16:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:31.806 17:16:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.806 17:16:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:32.064 17:16:18 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.064 17:16:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75896 00:15:32.064 17:16:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:32.064 17:16:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.064 17:16:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:32.632 17:16:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.632 17:16:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75896 00:15:32.632 17:16:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:32.632 17:16:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.632 17:16:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:32.892 17:16:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.892 17:16:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75896 00:15:32.892 17:16:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:32.892 17:16:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.892 17:16:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:33.153 17:16:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.153 17:16:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75896 00:15:33.153 17:16:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:33.153 17:16:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.153 17:16:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:33.410 17:16:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.410 17:16:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75896 00:15:33.410 17:16:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:33.410 17:16:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.410 17:16:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:33.666 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:33.666 17:16:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.666 17:16:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 75896 00:15:33.666 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (75896) - No such process 00:15:33.666 17:16:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 75896 00:15:33.666 17:16:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:15:33.666 17:16:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:33.666 17:16:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:15:33.666 17:16:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:33.666 17:16:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:15:33.924 17:16:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:33.924 17:16:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:15:33.924 17:16:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:33.924 17:16:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:33.924 rmmod nvme_tcp 00:15:33.924 rmmod nvme_fabrics 00:15:33.924 rmmod nvme_keyring 00:15:33.924 17:16:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:33.924 17:16:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:15:33.924 17:16:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:15:33.924 17:16:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 75843 ']' 00:15:33.924 17:16:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 75843 00:15:33.924 17:16:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 75843 ']' 00:15:33.924 17:16:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 75843 00:15:33.924 17:16:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:15:33.924 17:16:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:33.924 17:16:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75843 00:15:33.924 killing process with pid 75843 00:15:33.924 17:16:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:33.924 17:16:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:33.924 17:16:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75843' 00:15:33.924 17:16:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 75843 00:15:33.924 17:16:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 75843 00:15:34.182 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:34.182 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:34.182 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:34.182 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:34.182 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # 
remove_spdk_ns 00:15:34.182 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:34.182 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:34.182 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:34.182 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:34.182 00:15:34.182 real 0m12.293s 00:15:34.182 user 0m40.950s 00:15:34.182 sys 0m3.410s 00:15:34.182 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:34.182 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:34.182 ************************************ 00:15:34.182 END TEST nvmf_connect_stress 00:15:34.182 ************************************ 00:15:34.182 17:16:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:15:34.182 17:16:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:34.182 17:16:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:34.182 17:16:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:34.182 ************************************ 00:15:34.182 START TEST nvmf_fused_ordering 00:15:34.182 ************************************ 00:15:34.182 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:15:34.182 * Looking for test storage... 
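The connect_stress teardown that finishes just above follows the harness's usual wait-then-clean-up pattern: poll the background stressor with kill -0 until it exits, remove the queued-RPC file, then shut the target down. A minimal sketch of that pattern follows, with the PIDs and the rpc.txt path taken from this trace and the helper bodies simplified (the real logic lives in test/nvmf/target/connect_stress.sh and test/nvmf/common.sh); what feeds rpc_cmd inside the loop is not visible in this excerpt and is assumed to be rpc.txt:

    stress_pid=75896    # background connect/disconnect stressor started earlier in the test
    target_pid=75843    # nvmf_tgt application under test

    # kill -0 delivers no signal; it only reports whether the PID still exists.
    while kill -0 "$stress_pid" 2>/dev/null; do
        rpc_cmd         # each pass issues RPCs; rpc.txt (removed below) is presumably their source
    done

    rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt
    trap - SIGINT SIGTERM EXIT
    kill "$target_pid" && wait "$target_pid"    # simplified stand-in for killprocess 75843
    modprobe -v -r nvme-tcp                     # cascades to nvme_fabrics/nvme_keyring, as the rmmod lines above show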
00:15:34.182 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:34.183 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:34.183 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:15:34.183 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:34.183 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:34.183 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:34.183 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:34.183 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:34.183 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:34.183 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:34.183 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:34.183 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:34.183 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:34.441 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:15:34.441 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:15:34.441 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:34.441 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:34.441 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:34.441 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:34.441 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:34.441 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:34.441 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:34.441 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:34.441 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.441 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.441 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.441 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:15:34.441 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.441 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:15:34.441 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:34.441 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:34.441 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:34.441 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:34.441 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:34.441 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:15:34.441 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:34.441 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:34.441 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:15:34.441 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:34.441 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:34.441 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:34.441 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:34.441 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:34.441 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:34.441 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:34.441 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:34.441 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:34.441 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:34.441 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:34.441 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:34.441 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:34.441 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:34.441 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:34.441 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:34.441 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:34.441 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:34.441 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:34.441 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:34.441 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:34.441 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:34.441 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:34.441 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:34.441 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:34.441 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:34.442 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:34.442 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:34.442 Cannot find device "nvmf_tgt_br" 00:15:34.442 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@155 -- # true 00:15:34.442 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:34.442 Cannot find device "nvmf_tgt_br2" 00:15:34.442 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@156 -- # true 00:15:34.442 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:34.442 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:34.442 Cannot find device "nvmf_tgt_br" 00:15:34.442 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@158 -- # true 00:15:34.442 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:34.442 Cannot find device "nvmf_tgt_br2" 00:15:34.442 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@159 -- # true 00:15:34.442 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:34.442 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:34.442 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:34.442 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:34.442 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@162 -- # true 00:15:34.442 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:34.442 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:34.442 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@163 -- # true 00:15:34.442 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:34.442 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:34.442 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:34.442 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:34.442 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:34.442 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:34.442 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:34.442 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:34.442 
17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:34.442 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:34.442 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:34.442 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:34.442 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:34.442 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:34.442 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:34.442 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:34.442 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:34.442 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:34.700 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:34.700 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:34.700 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:34.700 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:34.700 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:34.700 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:34.700 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:34.700 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:15:34.700 00:15:34.700 --- 10.0.0.2 ping statistics --- 00:15:34.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:34.700 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:15:34.700 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:34.700 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:34.700 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.032 ms 00:15:34.700 00:15:34.700 --- 10.0.0.3 ping statistics --- 00:15:34.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:34.700 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:15:34.700 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:34.700 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:34.700 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:15:34.700 00:15:34.700 --- 10.0.0.1 ping statistics --- 00:15:34.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:34.700 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:15:34.700 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:34.700 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@433 -- # return 0 00:15:34.700 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:34.700 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:34.700 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:34.700 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:34.700 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:34.700 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:34.700 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:34.700 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:15:34.700 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:34.700 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:34.700 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:34.700 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=76227 00:15:34.701 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:34.701 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 76227 00:15:34.701 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 76227 ']' 00:15:34.701 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:34.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:34.701 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:34.701 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:34.701 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:34.701 17:16:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:34.701 [2024-07-24 17:16:20.809378] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
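Before the nvmf_tgt instance starting above was launched, nvmf_veth_init (nvmf/common.sh) built the virtual test network one ip/iptables call at a time, finishing with the three ping checks traced earlier. Condensed into a single readable block, with interface names, addresses, and the 4420 port rule copied from the trace and the pre-cleanup and error handling omitted, the topology it creates is:

    ip netns add nvmf_tgt_ns_spdk                                    # target runs in its own namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator side stays in the host
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br         # first target port
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2        # second target port
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if                                   # NVMF_INITIATOR_IP
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if     # NVMF_FIRST_TARGET_IP
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2    # NVMF_SECOND_TARGET_IP

    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    ip link add nvmf_br type bridge && ip link set nvmf_br up        # bridge joins the host-side veth ends
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                         # host to namespace reachability
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1                # namespace back to host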
00:15:34.701 [2024-07-24 17:16:20.809653] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:34.958 [2024-07-24 17:16:20.942509] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:34.958 [2024-07-24 17:16:21.061918] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:34.958 [2024-07-24 17:16:21.061967] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:34.958 [2024-07-24 17:16:21.061993] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:34.958 [2024-07-24 17:16:21.062001] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:34.958 [2024-07-24 17:16:21.062008] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:34.958 [2024-07-24 17:16:21.062041] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:35.891 17:16:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:35.891 17:16:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:15:35.891 17:16:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:35.891 17:16:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:35.891 17:16:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:35.891 17:16:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:35.891 17:16:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:35.891 17:16:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.892 17:16:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:35.892 [2024-07-24 17:16:21.842355] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:35.892 17:16:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.892 17:16:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:35.892 17:16:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.892 17:16:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:35.892 17:16:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.892 17:16:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:35.892 17:16:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.892 17:16:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:35.892 
[2024-07-24 17:16:21.858490] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:35.892 17:16:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.892 17:16:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:35.892 17:16:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.892 17:16:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:35.892 NULL1 00:15:35.892 17:16:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.892 17:16:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:15:35.892 17:16:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.892 17:16:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:35.892 17:16:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.892 17:16:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:15:35.892 17:16:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.892 17:16:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:35.892 17:16:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.892 17:16:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:15:35.892 [2024-07-24 17:16:21.909889] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
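The rpc_cmd calls traced just above configure the TCP target that the fused_ordering client now starting up will attach to. rpc_cmd is the harness wrapper that ultimately drives scripts/rpc.py; issued by hand against the target's RPC socket (the default /var/tmp/spdk.sock is assumed here, it is not shown in this trace), the same setup would look roughly like the block below, followed by the initiator-side command line copied from the trace:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512    # 1000 MB null bdev, 512-byte blocks: the "size: 1GB" namespace reported below
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

    # Initiator side: connect to that listener and, as the tool's name suggests,
    # exercise ordering of fused command pairs; it prints one fused_ordering(N)
    # line per iteration (0 through 1023 in the run that follows).
    /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'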
00:15:35.892 [2024-07-24 17:16:21.909937] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76278 ] 00:15:36.149 Attached to nqn.2016-06.io.spdk:cnode1 00:15:36.149 Namespace ID: 1 size: 1GB 00:15:36.149 fused_ordering(0) 00:15:36.149 fused_ordering(1) 00:15:36.149 fused_ordering(2) 00:15:36.149 fused_ordering(3) 00:15:36.149 fused_ordering(4) 00:15:36.149 fused_ordering(5) 00:15:36.149 fused_ordering(6) 00:15:36.149 fused_ordering(7) 00:15:36.149 fused_ordering(8) 00:15:36.149 fused_ordering(9) 00:15:36.149 fused_ordering(10) 00:15:36.149 fused_ordering(11) 00:15:36.149 fused_ordering(12) 00:15:36.149 fused_ordering(13) 00:15:36.149 fused_ordering(14) 00:15:36.149 fused_ordering(15) 00:15:36.149 fused_ordering(16) 00:15:36.149 fused_ordering(17) 00:15:36.149 fused_ordering(18) 00:15:36.150 fused_ordering(19) 00:15:36.150 fused_ordering(20) 00:15:36.150 fused_ordering(21) 00:15:36.150 fused_ordering(22) 00:15:36.150 fused_ordering(23) 00:15:36.150 fused_ordering(24) 00:15:36.150 fused_ordering(25) 00:15:36.150 fused_ordering(26) 00:15:36.150 fused_ordering(27) 00:15:36.150 fused_ordering(28) 00:15:36.150 fused_ordering(29) 00:15:36.150 fused_ordering(30) 00:15:36.150 fused_ordering(31) 00:15:36.150 fused_ordering(32) 00:15:36.150 fused_ordering(33) 00:15:36.150 fused_ordering(34) 00:15:36.150 fused_ordering(35) 00:15:36.150 fused_ordering(36) 00:15:36.150 fused_ordering(37) 00:15:36.150 fused_ordering(38) 00:15:36.150 fused_ordering(39) 00:15:36.150 fused_ordering(40) 00:15:36.150 fused_ordering(41) 00:15:36.150 fused_ordering(42) 00:15:36.150 fused_ordering(43) 00:15:36.150 fused_ordering(44) 00:15:36.150 fused_ordering(45) 00:15:36.150 fused_ordering(46) 00:15:36.150 fused_ordering(47) 00:15:36.150 fused_ordering(48) 00:15:36.150 fused_ordering(49) 00:15:36.150 fused_ordering(50) 00:15:36.150 fused_ordering(51) 00:15:36.150 fused_ordering(52) 00:15:36.150 fused_ordering(53) 00:15:36.150 fused_ordering(54) 00:15:36.150 fused_ordering(55) 00:15:36.150 fused_ordering(56) 00:15:36.150 fused_ordering(57) 00:15:36.150 fused_ordering(58) 00:15:36.150 fused_ordering(59) 00:15:36.150 fused_ordering(60) 00:15:36.150 fused_ordering(61) 00:15:36.150 fused_ordering(62) 00:15:36.150 fused_ordering(63) 00:15:36.150 fused_ordering(64) 00:15:36.150 fused_ordering(65) 00:15:36.150 fused_ordering(66) 00:15:36.150 fused_ordering(67) 00:15:36.150 fused_ordering(68) 00:15:36.150 fused_ordering(69) 00:15:36.150 fused_ordering(70) 00:15:36.150 fused_ordering(71) 00:15:36.150 fused_ordering(72) 00:15:36.150 fused_ordering(73) 00:15:36.150 fused_ordering(74) 00:15:36.150 fused_ordering(75) 00:15:36.150 fused_ordering(76) 00:15:36.150 fused_ordering(77) 00:15:36.150 fused_ordering(78) 00:15:36.150 fused_ordering(79) 00:15:36.150 fused_ordering(80) 00:15:36.150 fused_ordering(81) 00:15:36.150 fused_ordering(82) 00:15:36.150 fused_ordering(83) 00:15:36.150 fused_ordering(84) 00:15:36.150 fused_ordering(85) 00:15:36.150 fused_ordering(86) 00:15:36.150 fused_ordering(87) 00:15:36.150 fused_ordering(88) 00:15:36.150 fused_ordering(89) 00:15:36.150 fused_ordering(90) 00:15:36.150 fused_ordering(91) 00:15:36.150 fused_ordering(92) 00:15:36.150 fused_ordering(93) 00:15:36.150 fused_ordering(94) 00:15:36.150 fused_ordering(95) 00:15:36.150 fused_ordering(96) 00:15:36.150 fused_ordering(97) 00:15:36.150 
fused_ordering(98) 00:15:36.150 fused_ordering(99) 00:15:36.150 fused_ordering(100) 00:15:36.150 fused_ordering(101) 00:15:36.150 fused_ordering(102) 00:15:36.150 fused_ordering(103) 00:15:36.150 fused_ordering(104) 00:15:36.150 fused_ordering(105) 00:15:36.150 fused_ordering(106) 00:15:36.150 fused_ordering(107) 00:15:36.150 fused_ordering(108) 00:15:36.150 fused_ordering(109) 00:15:36.150 fused_ordering(110) 00:15:36.150 fused_ordering(111) 00:15:36.150 fused_ordering(112) 00:15:36.150 fused_ordering(113) 00:15:36.150 fused_ordering(114) 00:15:36.150 fused_ordering(115) 00:15:36.150 fused_ordering(116) 00:15:36.150 fused_ordering(117) 00:15:36.150 fused_ordering(118) 00:15:36.150 fused_ordering(119) 00:15:36.150 fused_ordering(120) 00:15:36.150 fused_ordering(121) 00:15:36.150 fused_ordering(122) 00:15:36.150 fused_ordering(123) 00:15:36.150 fused_ordering(124) 00:15:36.150 fused_ordering(125) 00:15:36.150 fused_ordering(126) 00:15:36.150 fused_ordering(127) 00:15:36.150 fused_ordering(128) 00:15:36.150 fused_ordering(129) 00:15:36.150 fused_ordering(130) 00:15:36.150 fused_ordering(131) 00:15:36.150 fused_ordering(132) 00:15:36.150 fused_ordering(133) 00:15:36.150 fused_ordering(134) 00:15:36.150 fused_ordering(135) 00:15:36.150 fused_ordering(136) 00:15:36.150 fused_ordering(137) 00:15:36.150 fused_ordering(138) 00:15:36.150 fused_ordering(139) 00:15:36.150 fused_ordering(140) 00:15:36.150 fused_ordering(141) 00:15:36.150 fused_ordering(142) 00:15:36.150 fused_ordering(143) 00:15:36.150 fused_ordering(144) 00:15:36.150 fused_ordering(145) 00:15:36.150 fused_ordering(146) 00:15:36.150 fused_ordering(147) 00:15:36.150 fused_ordering(148) 00:15:36.150 fused_ordering(149) 00:15:36.150 fused_ordering(150) 00:15:36.150 fused_ordering(151) 00:15:36.150 fused_ordering(152) 00:15:36.150 fused_ordering(153) 00:15:36.150 fused_ordering(154) 00:15:36.150 fused_ordering(155) 00:15:36.150 fused_ordering(156) 00:15:36.150 fused_ordering(157) 00:15:36.150 fused_ordering(158) 00:15:36.150 fused_ordering(159) 00:15:36.150 fused_ordering(160) 00:15:36.150 fused_ordering(161) 00:15:36.150 fused_ordering(162) 00:15:36.150 fused_ordering(163) 00:15:36.150 fused_ordering(164) 00:15:36.150 fused_ordering(165) 00:15:36.150 fused_ordering(166) 00:15:36.150 fused_ordering(167) 00:15:36.150 fused_ordering(168) 00:15:36.150 fused_ordering(169) 00:15:36.150 fused_ordering(170) 00:15:36.150 fused_ordering(171) 00:15:36.150 fused_ordering(172) 00:15:36.150 fused_ordering(173) 00:15:36.150 fused_ordering(174) 00:15:36.150 fused_ordering(175) 00:15:36.150 fused_ordering(176) 00:15:36.150 fused_ordering(177) 00:15:36.150 fused_ordering(178) 00:15:36.150 fused_ordering(179) 00:15:36.150 fused_ordering(180) 00:15:36.150 fused_ordering(181) 00:15:36.150 fused_ordering(182) 00:15:36.150 fused_ordering(183) 00:15:36.150 fused_ordering(184) 00:15:36.150 fused_ordering(185) 00:15:36.150 fused_ordering(186) 00:15:36.150 fused_ordering(187) 00:15:36.150 fused_ordering(188) 00:15:36.150 fused_ordering(189) 00:15:36.150 fused_ordering(190) 00:15:36.150 fused_ordering(191) 00:15:36.150 fused_ordering(192) 00:15:36.150 fused_ordering(193) 00:15:36.150 fused_ordering(194) 00:15:36.150 fused_ordering(195) 00:15:36.150 fused_ordering(196) 00:15:36.150 fused_ordering(197) 00:15:36.150 fused_ordering(198) 00:15:36.150 fused_ordering(199) 00:15:36.150 fused_ordering(200) 00:15:36.150 fused_ordering(201) 00:15:36.150 fused_ordering(202) 00:15:36.150 fused_ordering(203) 00:15:36.150 fused_ordering(204) 00:15:36.150 fused_ordering(205) 
00:15:36.408 fused_ordering(206) 00:15:36.408 fused_ordering(207) 00:15:36.408 fused_ordering(208) 00:15:36.408 fused_ordering(209) 00:15:36.408 fused_ordering(210) 00:15:36.408 fused_ordering(211) 00:15:36.408 fused_ordering(212) 00:15:36.408 fused_ordering(213) 00:15:36.408 fused_ordering(214) 00:15:36.408 fused_ordering(215) 00:15:36.408 fused_ordering(216) 00:15:36.408 fused_ordering(217) 00:15:36.408 fused_ordering(218) 00:15:36.408 fused_ordering(219) 00:15:36.408 fused_ordering(220) 00:15:36.408 fused_ordering(221) 00:15:36.408 fused_ordering(222) 00:15:36.408 fused_ordering(223) 00:15:36.408 fused_ordering(224) 00:15:36.408 fused_ordering(225) 00:15:36.408 fused_ordering(226) 00:15:36.408 fused_ordering(227) 00:15:36.408 fused_ordering(228) 00:15:36.408 fused_ordering(229) 00:15:36.408 fused_ordering(230) 00:15:36.408 fused_ordering(231) 00:15:36.408 fused_ordering(232) 00:15:36.408 fused_ordering(233) 00:15:36.408 fused_ordering(234) 00:15:36.408 fused_ordering(235) 00:15:36.408 fused_ordering(236) 00:15:36.408 fused_ordering(237) 00:15:36.408 fused_ordering(238) 00:15:36.408 fused_ordering(239) 00:15:36.408 fused_ordering(240) 00:15:36.408 fused_ordering(241) 00:15:36.408 fused_ordering(242) 00:15:36.408 fused_ordering(243) 00:15:36.408 fused_ordering(244) 00:15:36.408 fused_ordering(245) 00:15:36.408 fused_ordering(246) 00:15:36.408 fused_ordering(247) 00:15:36.408 fused_ordering(248) 00:15:36.408 fused_ordering(249) 00:15:36.408 fused_ordering(250) 00:15:36.408 fused_ordering(251) 00:15:36.408 fused_ordering(252) 00:15:36.408 fused_ordering(253) 00:15:36.408 fused_ordering(254) 00:15:36.408 fused_ordering(255) 00:15:36.408 fused_ordering(256) 00:15:36.408 fused_ordering(257) 00:15:36.408 fused_ordering(258) 00:15:36.408 fused_ordering(259) 00:15:36.408 fused_ordering(260) 00:15:36.408 fused_ordering(261) 00:15:36.408 fused_ordering(262) 00:15:36.408 fused_ordering(263) 00:15:36.408 fused_ordering(264) 00:15:36.408 fused_ordering(265) 00:15:36.408 fused_ordering(266) 00:15:36.408 fused_ordering(267) 00:15:36.408 fused_ordering(268) 00:15:36.408 fused_ordering(269) 00:15:36.408 fused_ordering(270) 00:15:36.408 fused_ordering(271) 00:15:36.408 fused_ordering(272) 00:15:36.408 fused_ordering(273) 00:15:36.408 fused_ordering(274) 00:15:36.408 fused_ordering(275) 00:15:36.408 fused_ordering(276) 00:15:36.408 fused_ordering(277) 00:15:36.408 fused_ordering(278) 00:15:36.408 fused_ordering(279) 00:15:36.408 fused_ordering(280) 00:15:36.408 fused_ordering(281) 00:15:36.408 fused_ordering(282) 00:15:36.408 fused_ordering(283) 00:15:36.408 fused_ordering(284) 00:15:36.408 fused_ordering(285) 00:15:36.408 fused_ordering(286) 00:15:36.408 fused_ordering(287) 00:15:36.408 fused_ordering(288) 00:15:36.408 fused_ordering(289) 00:15:36.408 fused_ordering(290) 00:15:36.408 fused_ordering(291) 00:15:36.408 fused_ordering(292) 00:15:36.408 fused_ordering(293) 00:15:36.408 fused_ordering(294) 00:15:36.408 fused_ordering(295) 00:15:36.408 fused_ordering(296) 00:15:36.408 fused_ordering(297) 00:15:36.408 fused_ordering(298) 00:15:36.408 fused_ordering(299) 00:15:36.409 fused_ordering(300) 00:15:36.409 fused_ordering(301) 00:15:36.409 fused_ordering(302) 00:15:36.409 fused_ordering(303) 00:15:36.409 fused_ordering(304) 00:15:36.409 fused_ordering(305) 00:15:36.409 fused_ordering(306) 00:15:36.409 fused_ordering(307) 00:15:36.409 fused_ordering(308) 00:15:36.409 fused_ordering(309) 00:15:36.409 fused_ordering(310) 00:15:36.409 fused_ordering(311) 00:15:36.409 fused_ordering(312) 00:15:36.409 
fused_ordering(313) 00:15:36.409 fused_ordering(314) 00:15:36.409 fused_ordering(315) 00:15:36.409 fused_ordering(316) 00:15:36.409 fused_ordering(317) 00:15:36.409 fused_ordering(318) 00:15:36.409 fused_ordering(319) 00:15:36.409 fused_ordering(320) 00:15:36.409 fused_ordering(321) 00:15:36.409 fused_ordering(322) 00:15:36.409 fused_ordering(323) 00:15:36.409 fused_ordering(324) 00:15:36.409 fused_ordering(325) 00:15:36.409 fused_ordering(326) 00:15:36.409 fused_ordering(327) 00:15:36.409 fused_ordering(328) 00:15:36.409 fused_ordering(329) 00:15:36.409 fused_ordering(330) 00:15:36.409 fused_ordering(331) 00:15:36.409 fused_ordering(332) 00:15:36.409 fused_ordering(333) 00:15:36.409 fused_ordering(334) 00:15:36.409 fused_ordering(335) 00:15:36.409 fused_ordering(336) 00:15:36.409 fused_ordering(337) 00:15:36.409 fused_ordering(338) 00:15:36.409 fused_ordering(339) 00:15:36.409 fused_ordering(340) 00:15:36.409 fused_ordering(341) 00:15:36.409 fused_ordering(342) 00:15:36.409 fused_ordering(343) 00:15:36.409 fused_ordering(344) 00:15:36.409 fused_ordering(345) 00:15:36.409 fused_ordering(346) 00:15:36.409 fused_ordering(347) 00:15:36.409 fused_ordering(348) 00:15:36.409 fused_ordering(349) 00:15:36.409 fused_ordering(350) 00:15:36.409 fused_ordering(351) 00:15:36.409 fused_ordering(352) 00:15:36.409 fused_ordering(353) 00:15:36.409 fused_ordering(354) 00:15:36.409 fused_ordering(355) 00:15:36.409 fused_ordering(356) 00:15:36.409 fused_ordering(357) 00:15:36.409 fused_ordering(358) 00:15:36.409 fused_ordering(359) 00:15:36.409 fused_ordering(360) 00:15:36.409 fused_ordering(361) 00:15:36.409 fused_ordering(362) 00:15:36.409 fused_ordering(363) 00:15:36.409 fused_ordering(364) 00:15:36.409 fused_ordering(365) 00:15:36.409 fused_ordering(366) 00:15:36.409 fused_ordering(367) 00:15:36.409 fused_ordering(368) 00:15:36.409 fused_ordering(369) 00:15:36.409 fused_ordering(370) 00:15:36.409 fused_ordering(371) 00:15:36.409 fused_ordering(372) 00:15:36.409 fused_ordering(373) 00:15:36.409 fused_ordering(374) 00:15:36.409 fused_ordering(375) 00:15:36.409 fused_ordering(376) 00:15:36.409 fused_ordering(377) 00:15:36.409 fused_ordering(378) 00:15:36.409 fused_ordering(379) 00:15:36.409 fused_ordering(380) 00:15:36.409 fused_ordering(381) 00:15:36.409 fused_ordering(382) 00:15:36.409 fused_ordering(383) 00:15:36.409 fused_ordering(384) 00:15:36.409 fused_ordering(385) 00:15:36.409 fused_ordering(386) 00:15:36.409 fused_ordering(387) 00:15:36.409 fused_ordering(388) 00:15:36.409 fused_ordering(389) 00:15:36.409 fused_ordering(390) 00:15:36.409 fused_ordering(391) 00:15:36.409 fused_ordering(392) 00:15:36.409 fused_ordering(393) 00:15:36.409 fused_ordering(394) 00:15:36.409 fused_ordering(395) 00:15:36.409 fused_ordering(396) 00:15:36.409 fused_ordering(397) 00:15:36.409 fused_ordering(398) 00:15:36.409 fused_ordering(399) 00:15:36.409 fused_ordering(400) 00:15:36.409 fused_ordering(401) 00:15:36.409 fused_ordering(402) 00:15:36.409 fused_ordering(403) 00:15:36.409 fused_ordering(404) 00:15:36.409 fused_ordering(405) 00:15:36.409 fused_ordering(406) 00:15:36.409 fused_ordering(407) 00:15:36.409 fused_ordering(408) 00:15:36.409 fused_ordering(409) 00:15:36.409 fused_ordering(410) 00:15:36.975 fused_ordering(411) 00:15:36.975 fused_ordering(412) 00:15:36.975 fused_ordering(413) 00:15:36.975 fused_ordering(414) 00:15:36.975 fused_ordering(415) 00:15:36.975 fused_ordering(416) 00:15:36.975 fused_ordering(417) 00:15:36.975 fused_ordering(418) 00:15:36.975 fused_ordering(419) 00:15:36.975 fused_ordering(420) 
00:15:36.975 fused_ordering(421) 00:15:36.975 fused_ordering(422) 00:15:36.975 fused_ordering(423) 00:15:36.975 fused_ordering(424) 00:15:36.975 fused_ordering(425) 00:15:36.975 fused_ordering(426) 00:15:36.975 fused_ordering(427) 00:15:36.975 fused_ordering(428) 00:15:36.975 fused_ordering(429) 00:15:36.975 fused_ordering(430) 00:15:36.976 fused_ordering(431) 00:15:36.976 fused_ordering(432) 00:15:36.976 fused_ordering(433) 00:15:36.976 fused_ordering(434) 00:15:36.976 fused_ordering(435) 00:15:36.976 fused_ordering(436) 00:15:36.976 fused_ordering(437) 00:15:36.976 fused_ordering(438) 00:15:36.976 fused_ordering(439) 00:15:36.976 fused_ordering(440) 00:15:36.976 fused_ordering(441) 00:15:36.976 fused_ordering(442) 00:15:36.976 fused_ordering(443) 00:15:36.976 fused_ordering(444) 00:15:36.976 fused_ordering(445) 00:15:36.976 fused_ordering(446) 00:15:36.976 fused_ordering(447) 00:15:36.976 fused_ordering(448) 00:15:36.976 fused_ordering(449) 00:15:36.976 fused_ordering(450) 00:15:36.976 fused_ordering(451) 00:15:36.976 fused_ordering(452) 00:15:36.976 fused_ordering(453) 00:15:36.976 fused_ordering(454) 00:15:36.976 fused_ordering(455) 00:15:36.976 fused_ordering(456) 00:15:36.976 fused_ordering(457) 00:15:36.976 fused_ordering(458) 00:15:36.976 fused_ordering(459) 00:15:36.976 fused_ordering(460) 00:15:36.976 fused_ordering(461) 00:15:36.976 fused_ordering(462) 00:15:36.976 fused_ordering(463) 00:15:36.976 fused_ordering(464) 00:15:36.976 fused_ordering(465) 00:15:36.976 fused_ordering(466) 00:15:36.976 fused_ordering(467) 00:15:36.976 fused_ordering(468) 00:15:36.976 fused_ordering(469) 00:15:36.976 fused_ordering(470) 00:15:36.976 fused_ordering(471) 00:15:36.976 fused_ordering(472) 00:15:36.976 fused_ordering(473) 00:15:36.976 fused_ordering(474) 00:15:36.976 fused_ordering(475) 00:15:36.976 fused_ordering(476) 00:15:36.976 fused_ordering(477) 00:15:36.976 fused_ordering(478) 00:15:36.976 fused_ordering(479) 00:15:36.976 fused_ordering(480) 00:15:36.976 fused_ordering(481) 00:15:36.976 fused_ordering(482) 00:15:36.976 fused_ordering(483) 00:15:36.976 fused_ordering(484) 00:15:36.976 fused_ordering(485) 00:15:36.976 fused_ordering(486) 00:15:36.976 fused_ordering(487) 00:15:36.976 fused_ordering(488) 00:15:36.976 fused_ordering(489) 00:15:36.976 fused_ordering(490) 00:15:36.976 fused_ordering(491) 00:15:36.976 fused_ordering(492) 00:15:36.976 fused_ordering(493) 00:15:36.976 fused_ordering(494) 00:15:36.976 fused_ordering(495) 00:15:36.976 fused_ordering(496) 00:15:36.976 fused_ordering(497) 00:15:36.976 fused_ordering(498) 00:15:36.976 fused_ordering(499) 00:15:36.976 fused_ordering(500) 00:15:36.976 fused_ordering(501) 00:15:36.976 fused_ordering(502) 00:15:36.976 fused_ordering(503) 00:15:36.976 fused_ordering(504) 00:15:36.976 fused_ordering(505) 00:15:36.976 fused_ordering(506) 00:15:36.976 fused_ordering(507) 00:15:36.976 fused_ordering(508) 00:15:36.976 fused_ordering(509) 00:15:36.976 fused_ordering(510) 00:15:36.976 fused_ordering(511) 00:15:36.976 fused_ordering(512) 00:15:36.976 fused_ordering(513) 00:15:36.976 fused_ordering(514) 00:15:36.976 fused_ordering(515) 00:15:36.976 fused_ordering(516) 00:15:36.976 fused_ordering(517) 00:15:36.976 fused_ordering(518) 00:15:36.976 fused_ordering(519) 00:15:36.976 fused_ordering(520) 00:15:36.976 fused_ordering(521) 00:15:36.976 fused_ordering(522) 00:15:36.976 fused_ordering(523) 00:15:36.976 fused_ordering(524) 00:15:36.976 fused_ordering(525) 00:15:36.976 fused_ordering(526) 00:15:36.976 fused_ordering(527) 00:15:36.976 
fused_ordering(528) 00:15:36.976 fused_ordering(529) 00:15:36.976 fused_ordering(530) 00:15:36.976 fused_ordering(531) 00:15:36.976 fused_ordering(532) 00:15:36.976 fused_ordering(533) 00:15:36.976 fused_ordering(534) 00:15:36.976 fused_ordering(535) 00:15:36.976 fused_ordering(536) 00:15:36.976 fused_ordering(537) 00:15:36.976 fused_ordering(538) 00:15:36.976 fused_ordering(539) 00:15:36.976 fused_ordering(540) 00:15:36.976 fused_ordering(541) 00:15:36.976 fused_ordering(542) 00:15:36.976 fused_ordering(543) 00:15:36.976 fused_ordering(544) 00:15:36.976 fused_ordering(545) 00:15:36.976 fused_ordering(546) 00:15:36.976 fused_ordering(547) 00:15:36.976 fused_ordering(548) 00:15:36.976 fused_ordering(549) 00:15:36.976 fused_ordering(550) 00:15:36.976 fused_ordering(551) 00:15:36.976 fused_ordering(552) 00:15:36.976 fused_ordering(553) 00:15:36.976 fused_ordering(554) 00:15:36.976 fused_ordering(555) 00:15:36.976 fused_ordering(556) 00:15:36.976 fused_ordering(557) 00:15:36.976 fused_ordering(558) 00:15:36.976 fused_ordering(559) 00:15:36.976 fused_ordering(560) 00:15:36.976 fused_ordering(561) 00:15:36.976 fused_ordering(562) 00:15:36.976 fused_ordering(563) 00:15:36.976 fused_ordering(564) 00:15:36.976 fused_ordering(565) 00:15:36.976 fused_ordering(566) 00:15:36.976 fused_ordering(567) 00:15:36.976 fused_ordering(568) 00:15:36.976 fused_ordering(569) 00:15:36.976 fused_ordering(570) 00:15:36.976 fused_ordering(571) 00:15:36.976 fused_ordering(572) 00:15:36.976 fused_ordering(573) 00:15:36.976 fused_ordering(574) 00:15:36.976 fused_ordering(575) 00:15:36.976 fused_ordering(576) 00:15:36.976 fused_ordering(577) 00:15:36.976 fused_ordering(578) 00:15:36.976 fused_ordering(579) 00:15:36.976 fused_ordering(580) 00:15:36.976 fused_ordering(581) 00:15:36.976 fused_ordering(582) 00:15:36.976 fused_ordering(583) 00:15:36.976 fused_ordering(584) 00:15:36.976 fused_ordering(585) 00:15:36.976 fused_ordering(586) 00:15:36.976 fused_ordering(587) 00:15:36.976 fused_ordering(588) 00:15:36.976 fused_ordering(589) 00:15:36.976 fused_ordering(590) 00:15:36.976 fused_ordering(591) 00:15:36.976 fused_ordering(592) 00:15:36.976 fused_ordering(593) 00:15:36.976 fused_ordering(594) 00:15:36.976 fused_ordering(595) 00:15:36.976 fused_ordering(596) 00:15:36.976 fused_ordering(597) 00:15:36.976 fused_ordering(598) 00:15:36.976 fused_ordering(599) 00:15:36.976 fused_ordering(600) 00:15:36.976 fused_ordering(601) 00:15:36.976 fused_ordering(602) 00:15:36.976 fused_ordering(603) 00:15:36.976 fused_ordering(604) 00:15:36.976 fused_ordering(605) 00:15:36.976 fused_ordering(606) 00:15:36.976 fused_ordering(607) 00:15:36.976 fused_ordering(608) 00:15:36.976 fused_ordering(609) 00:15:36.976 fused_ordering(610) 00:15:36.976 fused_ordering(611) 00:15:36.976 fused_ordering(612) 00:15:36.976 fused_ordering(613) 00:15:36.976 fused_ordering(614) 00:15:36.976 fused_ordering(615) 00:15:37.234 fused_ordering(616) 00:15:37.234 fused_ordering(617) 00:15:37.234 fused_ordering(618) 00:15:37.234 fused_ordering(619) 00:15:37.234 fused_ordering(620) 00:15:37.234 fused_ordering(621) 00:15:37.234 fused_ordering(622) 00:15:37.234 fused_ordering(623) 00:15:37.234 fused_ordering(624) 00:15:37.234 fused_ordering(625) 00:15:37.234 fused_ordering(626) 00:15:37.234 fused_ordering(627) 00:15:37.234 fused_ordering(628) 00:15:37.234 fused_ordering(629) 00:15:37.234 fused_ordering(630) 00:15:37.234 fused_ordering(631) 00:15:37.234 fused_ordering(632) 00:15:37.234 fused_ordering(633) 00:15:37.234 fused_ordering(634) 00:15:37.234 fused_ordering(635) 
00:15:37.234 fused_ordering(636) 00:15:37.234 fused_ordering(637) 00:15:37.234 fused_ordering(638) 00:15:37.234 fused_ordering(639) 00:15:37.234 fused_ordering(640) 00:15:37.234 fused_ordering(641) 00:15:37.234 fused_ordering(642) 00:15:37.234 fused_ordering(643) 00:15:37.234 fused_ordering(644) 00:15:37.234 fused_ordering(645) 00:15:37.234 fused_ordering(646) 00:15:37.234 fused_ordering(647) 00:15:37.234 fused_ordering(648) 00:15:37.234 fused_ordering(649) 00:15:37.234 fused_ordering(650) 00:15:37.234 fused_ordering(651) 00:15:37.234 fused_ordering(652) 00:15:37.234 fused_ordering(653) 00:15:37.234 fused_ordering(654) 00:15:37.235 fused_ordering(655) 00:15:37.235 fused_ordering(656) 00:15:37.235 fused_ordering(657) 00:15:37.235 fused_ordering(658) 00:15:37.235 fused_ordering(659) 00:15:37.235 fused_ordering(660) 00:15:37.235 fused_ordering(661) 00:15:37.235 fused_ordering(662) 00:15:37.235 fused_ordering(663) 00:15:37.235 fused_ordering(664) 00:15:37.235 fused_ordering(665) 00:15:37.235 fused_ordering(666) 00:15:37.235 fused_ordering(667) 00:15:37.235 fused_ordering(668) 00:15:37.235 fused_ordering(669) 00:15:37.235 fused_ordering(670) 00:15:37.235 fused_ordering(671) 00:15:37.235 fused_ordering(672) 00:15:37.235 fused_ordering(673) 00:15:37.235 fused_ordering(674) 00:15:37.235 fused_ordering(675) 00:15:37.235 fused_ordering(676) 00:15:37.235 fused_ordering(677) 00:15:37.235 fused_ordering(678) 00:15:37.235 fused_ordering(679) 00:15:37.235 fused_ordering(680) 00:15:37.235 fused_ordering(681) 00:15:37.235 fused_ordering(682) 00:15:37.235 fused_ordering(683) 00:15:37.235 fused_ordering(684) 00:15:37.235 fused_ordering(685) 00:15:37.235 fused_ordering(686) 00:15:37.235 fused_ordering(687) 00:15:37.235 fused_ordering(688) 00:15:37.235 fused_ordering(689) 00:15:37.235 fused_ordering(690) 00:15:37.235 fused_ordering(691) 00:15:37.235 fused_ordering(692) 00:15:37.235 fused_ordering(693) 00:15:37.235 fused_ordering(694) 00:15:37.235 fused_ordering(695) 00:15:37.235 fused_ordering(696) 00:15:37.235 fused_ordering(697) 00:15:37.235 fused_ordering(698) 00:15:37.235 fused_ordering(699) 00:15:37.235 fused_ordering(700) 00:15:37.235 fused_ordering(701) 00:15:37.235 fused_ordering(702) 00:15:37.235 fused_ordering(703) 00:15:37.235 fused_ordering(704) 00:15:37.235 fused_ordering(705) 00:15:37.235 fused_ordering(706) 00:15:37.235 fused_ordering(707) 00:15:37.235 fused_ordering(708) 00:15:37.235 fused_ordering(709) 00:15:37.235 fused_ordering(710) 00:15:37.235 fused_ordering(711) 00:15:37.235 fused_ordering(712) 00:15:37.235 fused_ordering(713) 00:15:37.235 fused_ordering(714) 00:15:37.235 fused_ordering(715) 00:15:37.235 fused_ordering(716) 00:15:37.235 fused_ordering(717) 00:15:37.235 fused_ordering(718) 00:15:37.235 fused_ordering(719) 00:15:37.235 fused_ordering(720) 00:15:37.235 fused_ordering(721) 00:15:37.235 fused_ordering(722) 00:15:37.235 fused_ordering(723) 00:15:37.235 fused_ordering(724) 00:15:37.235 fused_ordering(725) 00:15:37.235 fused_ordering(726) 00:15:37.235 fused_ordering(727) 00:15:37.235 fused_ordering(728) 00:15:37.235 fused_ordering(729) 00:15:37.235 fused_ordering(730) 00:15:37.235 fused_ordering(731) 00:15:37.235 fused_ordering(732) 00:15:37.235 fused_ordering(733) 00:15:37.235 fused_ordering(734) 00:15:37.235 fused_ordering(735) 00:15:37.235 fused_ordering(736) 00:15:37.235 fused_ordering(737) 00:15:37.235 fused_ordering(738) 00:15:37.235 fused_ordering(739) 00:15:37.235 fused_ordering(740) 00:15:37.235 fused_ordering(741) 00:15:37.235 fused_ordering(742) 00:15:37.235 
fused_ordering(743) 00:15:37.235 fused_ordering(744) 00:15:37.235 fused_ordering(745) 00:15:37.235 fused_ordering(746) 00:15:37.235 fused_ordering(747) 00:15:37.235 fused_ordering(748) 00:15:37.235 fused_ordering(749) 00:15:37.235 fused_ordering(750) 00:15:37.235 fused_ordering(751) 00:15:37.235 fused_ordering(752) 00:15:37.235 fused_ordering(753) 00:15:37.235 fused_ordering(754) 00:15:37.235 fused_ordering(755) 00:15:37.235 fused_ordering(756) 00:15:37.235 fused_ordering(757) 00:15:37.235 fused_ordering(758) 00:15:37.235 fused_ordering(759) 00:15:37.235 fused_ordering(760) 00:15:37.235 fused_ordering(761) 00:15:37.235 fused_ordering(762) 00:15:37.235 fused_ordering(763) 00:15:37.235 fused_ordering(764) 00:15:37.235 fused_ordering(765) 00:15:37.235 fused_ordering(766) 00:15:37.235 fused_ordering(767) 00:15:37.235 fused_ordering(768) 00:15:37.235 fused_ordering(769) 00:15:37.235 fused_ordering(770) 00:15:37.235 fused_ordering(771) 00:15:37.235 fused_ordering(772) 00:15:37.235 fused_ordering(773) 00:15:37.235 fused_ordering(774) 00:15:37.235 fused_ordering(775) 00:15:37.235 fused_ordering(776) 00:15:37.235 fused_ordering(777) 00:15:37.235 fused_ordering(778) 00:15:37.235 fused_ordering(779) 00:15:37.235 fused_ordering(780) 00:15:37.235 fused_ordering(781) 00:15:37.235 fused_ordering(782) 00:15:37.235 fused_ordering(783) 00:15:37.235 fused_ordering(784) 00:15:37.235 fused_ordering(785) 00:15:37.235 fused_ordering(786) 00:15:37.235 fused_ordering(787) 00:15:37.235 fused_ordering(788) 00:15:37.235 fused_ordering(789) 00:15:37.235 fused_ordering(790) 00:15:37.235 fused_ordering(791) 00:15:37.235 fused_ordering(792) 00:15:37.235 fused_ordering(793) 00:15:37.235 fused_ordering(794) 00:15:37.235 fused_ordering(795) 00:15:37.235 fused_ordering(796) 00:15:37.235 fused_ordering(797) 00:15:37.235 fused_ordering(798) 00:15:37.235 fused_ordering(799) 00:15:37.235 fused_ordering(800) 00:15:37.235 fused_ordering(801) 00:15:37.235 fused_ordering(802) 00:15:37.235 fused_ordering(803) 00:15:37.235 fused_ordering(804) 00:15:37.235 fused_ordering(805) 00:15:37.235 fused_ordering(806) 00:15:37.235 fused_ordering(807) 00:15:37.235 fused_ordering(808) 00:15:37.235 fused_ordering(809) 00:15:37.235 fused_ordering(810) 00:15:37.235 fused_ordering(811) 00:15:37.235 fused_ordering(812) 00:15:37.235 fused_ordering(813) 00:15:37.235 fused_ordering(814) 00:15:37.235 fused_ordering(815) 00:15:37.235 fused_ordering(816) 00:15:37.235 fused_ordering(817) 00:15:37.235 fused_ordering(818) 00:15:37.235 fused_ordering(819) 00:15:37.235 fused_ordering(820) 00:15:37.800 fused_ordering(821) 00:15:37.800 fused_ordering(822) 00:15:37.800 fused_ordering(823) 00:15:37.800 fused_ordering(824) 00:15:37.800 fused_ordering(825) 00:15:37.800 fused_ordering(826) 00:15:37.800 fused_ordering(827) 00:15:37.800 fused_ordering(828) 00:15:37.800 fused_ordering(829) 00:15:37.800 fused_ordering(830) 00:15:37.800 fused_ordering(831) 00:15:37.800 fused_ordering(832) 00:15:37.800 fused_ordering(833) 00:15:37.800 fused_ordering(834) 00:15:37.800 fused_ordering(835) 00:15:37.800 fused_ordering(836) 00:15:37.800 fused_ordering(837) 00:15:37.800 fused_ordering(838) 00:15:37.800 fused_ordering(839) 00:15:37.800 fused_ordering(840) 00:15:37.800 fused_ordering(841) 00:15:37.800 fused_ordering(842) 00:15:37.800 fused_ordering(843) 00:15:37.800 fused_ordering(844) 00:15:37.800 fused_ordering(845) 00:15:37.800 fused_ordering(846) 00:15:37.800 fused_ordering(847) 00:15:37.800 fused_ordering(848) 00:15:37.800 fused_ordering(849) 00:15:37.800 fused_ordering(850) 
00:15:37.800 fused_ordering(851) 00:15:37.800 fused_ordering(852) 00:15:37.800 fused_ordering(853) 00:15:37.800 fused_ordering(854) 00:15:37.800 fused_ordering(855) 00:15:37.800 fused_ordering(856) 00:15:37.800 fused_ordering(857) 00:15:37.800 fused_ordering(858) 00:15:37.800 fused_ordering(859) 00:15:37.800 fused_ordering(860) 00:15:37.800 fused_ordering(861) 00:15:37.801 fused_ordering(862) 00:15:37.801 fused_ordering(863) 00:15:37.801 fused_ordering(864) 00:15:37.801 fused_ordering(865) 00:15:37.801 fused_ordering(866) 00:15:37.801 fused_ordering(867) 00:15:37.801 fused_ordering(868) 00:15:37.801 fused_ordering(869) 00:15:37.801 fused_ordering(870) 00:15:37.801 fused_ordering(871) 00:15:37.801 fused_ordering(872) 00:15:37.801 fused_ordering(873) 00:15:37.801 fused_ordering(874) 00:15:37.801 fused_ordering(875) 00:15:37.801 fused_ordering(876) 00:15:37.801 fused_ordering(877) 00:15:37.801 fused_ordering(878) 00:15:37.801 fused_ordering(879) 00:15:37.801 fused_ordering(880) 00:15:37.801 fused_ordering(881) 00:15:37.801 fused_ordering(882) 00:15:37.801 fused_ordering(883) 00:15:37.801 fused_ordering(884) 00:15:37.801 fused_ordering(885) 00:15:37.801 fused_ordering(886) 00:15:37.801 fused_ordering(887) 00:15:37.801 fused_ordering(888) 00:15:37.801 fused_ordering(889) 00:15:37.801 fused_ordering(890) 00:15:37.801 fused_ordering(891) 00:15:37.801 fused_ordering(892) 00:15:37.801 fused_ordering(893) 00:15:37.801 fused_ordering(894) 00:15:37.801 fused_ordering(895) 00:15:37.801 fused_ordering(896) 00:15:37.801 fused_ordering(897) 00:15:37.801 fused_ordering(898) 00:15:37.801 fused_ordering(899) 00:15:37.801 fused_ordering(900) 00:15:37.801 fused_ordering(901) 00:15:37.801 fused_ordering(902) 00:15:37.801 fused_ordering(903) 00:15:37.801 fused_ordering(904) 00:15:37.801 fused_ordering(905) 00:15:37.801 fused_ordering(906) 00:15:37.801 fused_ordering(907) 00:15:37.801 fused_ordering(908) 00:15:37.801 fused_ordering(909) 00:15:37.801 fused_ordering(910) 00:15:37.801 fused_ordering(911) 00:15:37.801 fused_ordering(912) 00:15:37.801 fused_ordering(913) 00:15:37.801 fused_ordering(914) 00:15:37.801 fused_ordering(915) 00:15:37.801 fused_ordering(916) 00:15:37.801 fused_ordering(917) 00:15:37.801 fused_ordering(918) 00:15:37.801 fused_ordering(919) 00:15:37.801 fused_ordering(920) 00:15:37.801 fused_ordering(921) 00:15:37.801 fused_ordering(922) 00:15:37.801 fused_ordering(923) 00:15:37.801 fused_ordering(924) 00:15:37.801 fused_ordering(925) 00:15:37.801 fused_ordering(926) 00:15:37.801 fused_ordering(927) 00:15:37.801 fused_ordering(928) 00:15:37.801 fused_ordering(929) 00:15:37.801 fused_ordering(930) 00:15:37.801 fused_ordering(931) 00:15:37.801 fused_ordering(932) 00:15:37.801 fused_ordering(933) 00:15:37.801 fused_ordering(934) 00:15:37.801 fused_ordering(935) 00:15:37.801 fused_ordering(936) 00:15:37.801 fused_ordering(937) 00:15:37.801 fused_ordering(938) 00:15:37.801 fused_ordering(939) 00:15:37.801 fused_ordering(940) 00:15:37.801 fused_ordering(941) 00:15:37.801 fused_ordering(942) 00:15:37.801 fused_ordering(943) 00:15:37.801 fused_ordering(944) 00:15:37.801 fused_ordering(945) 00:15:37.801 fused_ordering(946) 00:15:37.801 fused_ordering(947) 00:15:37.801 fused_ordering(948) 00:15:37.801 fused_ordering(949) 00:15:37.801 fused_ordering(950) 00:15:37.801 fused_ordering(951) 00:15:37.801 fused_ordering(952) 00:15:37.801 fused_ordering(953) 00:15:37.801 fused_ordering(954) 00:15:37.801 fused_ordering(955) 00:15:37.801 fused_ordering(956) 00:15:37.801 fused_ordering(957) 00:15:37.801 
fused_ordering(958) 00:15:37.801 fused_ordering(959) 00:15:37.801 fused_ordering(960) 00:15:37.801 fused_ordering(961) 00:15:37.801 fused_ordering(962) 00:15:37.801 fused_ordering(963) 00:15:37.801 fused_ordering(964) 00:15:37.801 fused_ordering(965) 00:15:37.801 fused_ordering(966) 00:15:37.801 fused_ordering(967) 00:15:37.801 fused_ordering(968) 00:15:37.801 fused_ordering(969) 00:15:37.801 fused_ordering(970) 00:15:37.801 fused_ordering(971) 00:15:37.801 fused_ordering(972) 00:15:37.801 fused_ordering(973) 00:15:37.801 fused_ordering(974) 00:15:37.801 fused_ordering(975) 00:15:37.801 fused_ordering(976) 00:15:37.801 fused_ordering(977) 00:15:37.801 fused_ordering(978) 00:15:37.801 fused_ordering(979) 00:15:37.801 fused_ordering(980) 00:15:37.801 fused_ordering(981) 00:15:37.801 fused_ordering(982) 00:15:37.801 fused_ordering(983) 00:15:37.801 fused_ordering(984) 00:15:37.801 fused_ordering(985) 00:15:37.801 fused_ordering(986) 00:15:37.801 fused_ordering(987) 00:15:37.801 fused_ordering(988) 00:15:37.801 fused_ordering(989) 00:15:37.801 fused_ordering(990) 00:15:37.801 fused_ordering(991) 00:15:37.801 fused_ordering(992) 00:15:37.801 fused_ordering(993) 00:15:37.801 fused_ordering(994) 00:15:37.801 fused_ordering(995) 00:15:37.801 fused_ordering(996) 00:15:37.801 fused_ordering(997) 00:15:37.801 fused_ordering(998) 00:15:37.801 fused_ordering(999) 00:15:37.801 fused_ordering(1000) 00:15:37.801 fused_ordering(1001) 00:15:37.801 fused_ordering(1002) 00:15:37.801 fused_ordering(1003) 00:15:37.801 fused_ordering(1004) 00:15:37.801 fused_ordering(1005) 00:15:37.801 fused_ordering(1006) 00:15:37.801 fused_ordering(1007) 00:15:37.801 fused_ordering(1008) 00:15:37.801 fused_ordering(1009) 00:15:37.801 fused_ordering(1010) 00:15:37.801 fused_ordering(1011) 00:15:37.801 fused_ordering(1012) 00:15:37.801 fused_ordering(1013) 00:15:37.801 fused_ordering(1014) 00:15:37.801 fused_ordering(1015) 00:15:37.801 fused_ordering(1016) 00:15:37.801 fused_ordering(1017) 00:15:37.801 fused_ordering(1018) 00:15:37.801 fused_ordering(1019) 00:15:37.801 fused_ordering(1020) 00:15:37.801 fused_ordering(1021) 00:15:37.801 fused_ordering(1022) 00:15:37.801 fused_ordering(1023) 00:15:37.801 17:16:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:15:37.801 17:16:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:15:37.801 17:16:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:37.801 17:16:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:15:37.801 17:16:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:37.801 17:16:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:15:37.801 17:16:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:37.801 17:16:23 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:37.801 rmmod nvme_tcp 00:15:37.801 rmmod nvme_fabrics 00:15:37.801 rmmod nvme_keyring 00:15:37.801 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:37.801 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:15:37.801 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:15:37.801 17:16:24 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 76227 ']' 00:15:37.801 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 76227 00:15:37.801 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 76227 ']' 00:15:37.801 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # kill -0 76227 00:15:37.801 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname 00:15:37.801 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:38.059 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76227 00:15:38.059 killing process with pid 76227 00:15:38.059 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:38.060 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:38.060 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76227' 00:15:38.060 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 76227 00:15:38.060 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 76227 00:15:38.060 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:38.060 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:38.060 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:38.060 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:38.060 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:38.060 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:38.060 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:38.060 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:38.319 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:38.319 00:15:38.319 real 0m3.998s 00:15:38.319 user 0m4.751s 00:15:38.319 sys 0m1.366s 00:15:38.319 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:38.319 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:38.319 ************************************ 00:15:38.319 END TEST nvmf_fused_ordering 00:15:38.319 ************************************ 00:15:38.319 17:16:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:15:38.319 17:16:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:38.319 17:16:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:38.319 17:16:24 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:15:38.319 ************************************ 00:15:38.319 START TEST nvmf_ns_masking 00:15:38.319 ************************************ 00:15:38.319 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:15:38.319 * Looking for test storage... 00:15:38.319 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:38.319 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:38.319 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:15:38.319 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:38.319 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:38.319 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:38.319 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:38.319 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:38.319 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:38.319 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:38.319 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:38.319 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:38.319 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:38.319 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:15:38.319 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:15:38.319 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:38.319 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:38.319 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:38.319 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:38.319 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:38.319 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:38.319 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:38.319 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:38.319 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.319 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.319 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.319 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:15:38.319 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:38.319 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:15:38.319 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:38.319 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:38.319 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:38.319 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:38.319 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:38.319 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:38.319 17:16:24 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:38.319 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:38.319 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:38.319 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:15:38.319 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:15:38.319 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:15:38.319 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=8860a018-e032-4f55-baeb-caa698a0264d 00:15:38.320 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:15:38.320 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=2e2dcc63-275a-4595-bd59-7d03026b62ec 00:15:38.320 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:15:38.320 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:15:38.320 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:15:38.320 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:15:38.320 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=cadf4ff2-e29b-4ce7-afa4-96e5bab29bf0 00:15:38.320 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:15:38.320 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:38.320 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:38.320 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:38.320 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:38.320 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:38.320 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:38.320 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:38.320 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:38.320 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:38.320 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:38.320 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:38.320 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:38.320 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:38.320 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:38.320 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 
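For reference, the identifiers ns_masking.sh has just generated and reuses throughout the rest of this run, collected in one place (values are the ones printed in this trace; uuidgen would produce different UUIDs on another run):

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # SPDK JSON-RPC client
  hostsock=/var/tmp/host.sock                          # socket used by the spdk_tgt host-side app started further down
  loops=5
  ns1uuid=8860a018-e032-4f55-baeb-caa698a0264d         # uuidgen output for namespace 1
  ns2uuid=2e2dcc63-275a-4595-bd59-7d03026b62ec         # uuidgen output for namespace 2
  SUBSYSNQN=nqn.2016-06.io.spdk:cnode1                 # target subsystem NQN
  HOSTNQN1=nqn.2016-06.io.spdk:host1                   # host NQN whose namespace visibility gets toggled
  HOSTNQN2=nqn.2016-06.io.spdk:host2
  HOSTID=cadf4ff2-e29b-4ce7-afa4-96e5bab29bf0          # host UUID passed to 'nvme connect -I'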
00:15:38.320 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:38.320 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:38.320 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:38.320 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:38.320 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:38.320 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:38.320 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:38.320 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:38.320 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:38.320 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:38.320 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:38.320 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:38.320 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:38.320 Cannot find device "nvmf_tgt_br" 00:15:38.320 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@155 -- # true 00:15:38.320 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:38.320 Cannot find device "nvmf_tgt_br2" 00:15:38.320 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@156 -- # true 00:15:38.320 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:38.320 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:38.320 Cannot find device "nvmf_tgt_br" 00:15:38.320 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@158 -- # true 00:15:38.320 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:38.579 Cannot find device "nvmf_tgt_br2" 00:15:38.579 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@159 -- # true 00:15:38.579 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:38.579 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:38.579 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:38.579 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:38.579 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@162 -- # true 00:15:38.579 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:38.579 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:38.579 
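The "Cannot find device" / "Cannot open network namespace" messages above are expected: nvmf_veth_init first tears down any leftover topology from a previous run. A condensed sketch of the topology it rebuilds in the next few steps (same commands and interface names as in the trace; the individual 'ip link set ... up' calls are omitted here for brevity):

  # the target side lives in its own network namespace; the initiator stays in the root namespace
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator veth pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # first target veth pair
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # second target veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # first listener address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # second target address
  # bridge the root-namespace ends together and open TCP/4420 toward the initiator
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT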
17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@163 -- # true 00:15:38.579 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:38.579 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:38.579 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:38.579 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:38.579 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:38.579 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:38.579 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:38.579 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:38.579 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:38.579 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:38.579 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:38.579 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:38.579 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:38.579 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:38.579 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:38.579 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:38.579 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:38.579 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:38.579 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:38.579 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:38.579 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:38.579 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:38.579 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:38.579 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:38.838 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:38.838 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:15:38.838 00:15:38.838 --- 10.0.0.2 ping statistics --- 00:15:38.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:38.838 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:15:38.838 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:38.838 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:38.838 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:15:38.838 00:15:38.838 --- 10.0.0.3 ping statistics --- 00:15:38.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:38.838 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:15:38.838 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:38.838 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:38.838 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:15:38.838 00:15:38.838 --- 10.0.0.1 ping statistics --- 00:15:38.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:38.838 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:15:38.838 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:38.838 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@433 -- # return 0 00:15:38.838 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:38.838 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:38.838 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:38.838 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:38.838 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:38.838 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:38.838 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:38.838 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:15:38.838 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:38.838 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:38.838 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:38.838 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=76489 00:15:38.838 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:38.838 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 76489 00:15:38.838 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 76489 ']' 00:15:38.838 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:38.838 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:38.838 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:38.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:38.838 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:38.838 17:16:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:38.838 [2024-07-24 17:16:24.908632] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:15:38.838 [2024-07-24 17:16:24.908735] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:38.838 [2024-07-24 17:16:25.045319] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:39.096 [2024-07-24 17:16:25.157242] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:39.096 [2024-07-24 17:16:25.157324] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:39.096 [2024-07-24 17:16:25.157343] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:39.096 [2024-07-24 17:16:25.157356] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:39.096 [2024-07-24 17:16:25.157368] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:39.096 [2024-07-24 17:16:25.157405] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:40.032 17:16:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:40.032 17:16:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:15:40.032 17:16:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:40.032 17:16:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:40.032 17:16:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:40.032 17:16:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:40.032 17:16:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:40.032 [2024-07-24 17:16:26.218316] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:40.032 17:16:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:15:40.032 17:16:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:15:40.032 17:16:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:40.289 Malloc1 00:15:40.290 17:16:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:40.856 Malloc2 00:15:40.856 17:16:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:40.856 17:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:15:41.115 17:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:41.373 [2024-07-24 17:16:27.555205] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:41.373 17:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:15:41.373 17:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I cadf4ff2-e29b-4ce7-afa4-96e5bab29bf0 -a 10.0.0.2 -s 4420 -i 4 00:15:41.701 17:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:15:41.701 17:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:15:41.701 17:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:41.701 17:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:41.701 17:16:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:15:43.605 17:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:43.605 17:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:43.605 17:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:43.605 17:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:43.605 17:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:43.605 17:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:15:43.605 17:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:43.605 17:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:43.605 17:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:43.605 17:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:43.605 17:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:15:43.605 17:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:43.605 17:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:43.605 [ 0]:0x1 00:15:43.605 17:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:43.605 17:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 
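Stripped of the xtrace noise, the target setup and the first connect-and-verify step above amount to the following sketch (same RPCs and nvme-cli calls as in the trace):

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # target side: TCP transport, two 64 MB malloc bdevs, a subsystem with Malloc1 as namespace 1, listener on 10.0.0.2:4420
  $rpc_py nvmf_create_transport -t tcp -o -u 8192
  $rpc_py bdev_malloc_create 64 512 -b Malloc1
  $rpc_py bdev_malloc_create 64 512 -b Malloc2
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # initiator side: connect as host1 with a fixed host UUID, then check which namespaces this host can see
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      -I cadf4ff2-e29b-4ce7-afa4-96e5bab29bf0 -a 10.0.0.2 -s 4420 -i 4
  nvme list-ns /dev/nvme0 | grep 0x1                     # namespace 1 is listed as [ 0]:0x1
  nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid    # non-zero NGUID (62382ad9...) means visible; all zeroes means masked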
00:15:43.605 17:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=62382ad9c664429bac8bfabf323bc9dc 00:15:43.605 17:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 62382ad9c664429bac8bfabf323bc9dc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:43.605 17:16:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:15:44.173 17:16:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:15:44.173 17:16:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:44.173 17:16:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:44.174 [ 0]:0x1 00:15:44.174 17:16:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:44.174 17:16:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:44.174 17:16:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=62382ad9c664429bac8bfabf323bc9dc 00:15:44.174 17:16:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 62382ad9c664429bac8bfabf323bc9dc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:44.174 17:16:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:15:44.174 17:16:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:44.174 17:16:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:44.174 [ 1]:0x2 00:15:44.174 17:16:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:44.174 17:16:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:44.174 17:16:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=14b33c8027bf4f889256cada6374c011 00:15:44.174 17:16:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 14b33c8027bf4f889256cada6374c011 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:44.174 17:16:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:15:44.174 17:16:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:44.174 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:44.174 17:16:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:44.432 17:16:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:15:44.691 17:16:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:15:44.691 17:16:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I cadf4ff2-e29b-4ce7-afa4-96e5bab29bf0 -a 10.0.0.2 -s 4420 -i 4 00:15:44.950 17:16:30 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:15:44.950 17:16:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:15:44.950 17:16:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:44.950 17:16:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:15:44.950 17:16:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:15:44.950 17:16:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:15:46.896 17:16:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:46.896 17:16:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:46.896 17:16:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:46.896 17:16:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:46.896 17:16:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:46.896 17:16:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:15:46.896 17:16:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:46.896 17:16:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:46.896 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:46.896 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:46.896 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:15:46.896 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:15:46.896 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:15:46.896 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:15:46.896 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:46.896 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:15:46.896 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:46.896 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:15:46.896 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:46.896 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:46.896 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:46.896 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:46.896 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:46.896 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:46.896 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:15:46.896 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:46.896 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:46.896 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:46.896 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:15:46.896 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:46.896 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:46.896 [ 0]:0x2 00:15:46.896 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:46.896 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:47.154 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=14b33c8027bf4f889256cada6374c011 00:15:47.154 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 14b33c8027bf4f889256cada6374c011 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:47.154 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:47.411 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:15:47.411 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:47.411 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:47.411 [ 0]:0x1 00:15:47.411 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:47.411 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:47.411 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=62382ad9c664429bac8bfabf323bc9dc 00:15:47.411 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 62382ad9c664429bac8bfabf323bc9dc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:47.411 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:15:47.411 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:47.411 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:47.411 [ 1]:0x2 00:15:47.411 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:47.411 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:47.411 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=14b33c8027bf4f889256cada6374c011 00:15:47.411 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 14b33c8027bf4f889256cada6374c011 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:47.411 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:47.670 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:15:47.670 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:15:47.670 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:15:47.670 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:15:47.670 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:47.670 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:15:47.670 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:47.670 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:15:47.670 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:47.670 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:47.670 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:47.670 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:47.670 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:47.670 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:47.670 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:15:47.670 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:47.670 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:47.670 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:47.670 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:15:47.670 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:47.670 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:47.670 [ 0]:0x2 00:15:47.670 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:47.670 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:47.929 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=14b33c8027bf4f889256cada6374c011 00:15:47.929 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@45 -- # [[ 14b33c8027bf4f889256cada6374c011 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:47.929 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:15:47.929 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:47.929 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:47.929 17:16:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:48.188 17:16:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:15:48.188 17:16:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I cadf4ff2-e29b-4ce7-afa4-96e5bab29bf0 -a 10.0.0.2 -s 4420 -i 4 00:15:48.188 17:16:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:48.188 17:16:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:15:48.188 17:16:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:48.188 17:16:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:15:48.188 17:16:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:15:48.188 17:16:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:15:50.719 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:50.719 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:50.719 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:50.719 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:15:50.719 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:50.719 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:15:50.719 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:50.719 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:50.719 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:50.719 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:50.719 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:15:50.719 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:50.719 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:50.719 [ 0]:0x1 00:15:50.719 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o 
json 00:15:50.719 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:50.719 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=62382ad9c664429bac8bfabf323bc9dc 00:15:50.719 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 62382ad9c664429bac8bfabf323bc9dc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:50.719 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:15:50.719 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:50.719 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:50.719 [ 1]:0x2 00:15:50.719 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:50.719 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:50.719 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=14b33c8027bf4f889256cada6374c011 00:15:50.719 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 14b33c8027bf4f889256cada6374c011 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:50.719 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:50.719 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:15:50.719 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:15:50.719 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:15:50.719 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:15:50.719 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:50.719 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:15:50.719 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:50.719 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:15:50.719 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:50.719 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:50.719 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:50.719 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:50.719 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:50.719 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:50.719 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 
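That sequence is the heart of per-host namespace masking: because Malloc1 was re-attached with --no-auto-visible, host1 only sees namespace 1 while an explicit allow entry exists, and each toggle takes effect on the already-connected controller without reconnecting. A minimal sketch of the toggle (RPC names and arguments exactly as used above):

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # attach Malloc1 as namespace 1 without auto-visibility, so each host must be allowed explicitly
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible

  # expose namespace 1 to host1, then hide it again
  $rpc_py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
  $rpc_py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1

  # after each toggle the host re-reads the NGUID: non-zero means visible, all zeroes means masked
  nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid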
00:15:50.719 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:50.719 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:50.719 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:50.719 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:15:50.719 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:50.719 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:50.719 [ 0]:0x2 00:15:50.719 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:50.719 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:50.719 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=14b33c8027bf4f889256cada6374c011 00:15:50.719 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 14b33c8027bf4f889256cada6374c011 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:50.719 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:50.719 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:15:50.719 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:50.719 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:50.719 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:50.719 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:50.719 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:50.719 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:50.719 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:50.719 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:50.719 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:15:50.719 17:16:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:50.976 [2024-07-24 17:16:37.202163] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:15:50.976 2024/07/24 17:16:37 error on JSON-RPC call, method: nvmf_ns_remove_host, params: map[host:nqn.2016-06.io.spdk:host1 
nqn:nqn.2016-06.io.spdk:cnode1 nsid:2], err: error received for nvmf_ns_remove_host method, err: Code=-32602 Msg=Invalid parameters 00:15:50.976 request: 00:15:50.976 { 00:15:50.976 "method": "nvmf_ns_remove_host", 00:15:50.976 "params": { 00:15:50.976 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:50.976 "nsid": 2, 00:15:50.976 "host": "nqn.2016-06.io.spdk:host1" 00:15:50.976 } 00:15:50.976 } 00:15:50.976 Got JSON-RPC error response 00:15:50.976 GoRPCClient: error on JSON-RPC call 00:15:51.236 17:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:15:51.236 17:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:51.236 17:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:51.236 17:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:51.236 17:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:15:51.236 17:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:15:51.236 17:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:15:51.236 17:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:15:51.236 17:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:51.236 17:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:15:51.236 17:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:51.236 17:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:15:51.236 17:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:51.236 17:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:51.236 17:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:51.236 17:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:51.236 17:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:51.236 17:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:51.236 17:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:15:51.236 17:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:51.236 17:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:51.236 17:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:51.236 17:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:15:51.236 17:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:51.236 17:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns 
/dev/nvme0 00:15:51.236 [ 0]:0x2 00:15:51.236 17:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:51.236 17:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:51.236 17:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=14b33c8027bf4f889256cada6374c011 00:15:51.236 17:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 14b33c8027bf4f889256cada6374c011 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:51.236 17:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:15:51.236 17:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:51.236 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:51.236 17:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=76876 00:15:51.236 17:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:15:51.236 17:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:15:51.236 17:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 76876 /var/tmp/host.sock 00:15:51.236 17:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 76876 ']' 00:15:51.236 17:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:15:51.236 17:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:51.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:15:51.236 17:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:15:51.236 17:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:51.236 17:16:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:51.495 [2024-07-24 17:16:37.472834] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
00:15:51.495 [2024-07-24 17:16:37.472943] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76876 ] 00:15:51.495 [2024-07-24 17:16:37.614814] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:51.752 [2024-07-24 17:16:37.747430] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:52.319 17:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:52.319 17:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:15:52.319 17:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:52.577 17:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:52.835 17:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 8860a018-e032-4f55-baeb-caa698a0264d 00:15:52.835 17:16:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:15:52.835 17:16:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 8860A018E0324F55BAEBCAA698A0264D -i 00:15:53.094 17:16:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 2e2dcc63-275a-4595-bd59-7d03026b62ec 00:15:53.094 17:16:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:15:53.094 17:16:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 2E2DCC63275A4595BD597D03026B62EC -i 00:15:53.352 17:16:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:53.610 17:16:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:15:53.868 17:16:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:15:53.868 17:16:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:15:54.126 nvme0n1 00:15:54.126 17:16:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:15:54.126 17:16:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 
-s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:15:54.692 nvme1n2 00:15:54.692 17:16:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:15:54.692 17:16:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:15:54.692 17:16:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:15:54.692 17:16:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:15:54.692 17:16:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:15:54.950 17:16:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:15:54.950 17:16:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:15:54.950 17:16:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:15:54.950 17:16:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:15:55.208 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 8860a018-e032-4f55-baeb-caa698a0264d == \8\8\6\0\a\0\1\8\-\e\0\3\2\-\4\f\5\5\-\b\a\e\b\-\c\a\a\6\9\8\a\0\2\6\4\d ]] 00:15:55.208 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:15:55.208 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:15:55.208 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:15:55.467 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 2e2dcc63-275a-4595-bd59-7d03026b62ec == \2\e\2\d\c\c\6\3\-\2\7\5\a\-\4\5\9\5\-\b\d\5\9\-\7\d\0\3\0\2\6\b\6\2\e\c ]] 00:15:55.467 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 76876 00:15:55.467 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 76876 ']' 00:15:55.467 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 76876 00:15:55.467 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:15:55.467 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:55.467 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76876 00:15:55.467 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:55.467 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:55.467 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76876' 00:15:55.467 killing process with pid 76876 00:15:55.467 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 76876 00:15:55.467 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # 
wait 76876 00:15:56.033 17:16:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:56.033 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:15:56.033 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:15:56.033 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:56.033 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:15:56.033 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:56.033 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:15:56.033 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:56.033 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:56.033 rmmod nvme_tcp 00:15:56.033 rmmod nvme_fabrics 00:15:56.291 rmmod nvme_keyring 00:15:56.291 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:56.291 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:15:56.291 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:15:56.291 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 76489 ']' 00:15:56.292 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 76489 00:15:56.292 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 76489 ']' 00:15:56.292 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 76489 00:15:56.292 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:15:56.292 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:56.292 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76489 00:15:56.292 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:56.292 killing process with pid 76489 00:15:56.292 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:56.292 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76489' 00:15:56.292 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 76489 00:15:56.292 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 76489 00:15:56.550 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:56.550 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:56.550 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:56.550 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:56.550 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 
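The tail of the masking test above is the usual teardown order: remove the subsystem over JSON-RPC, unload the kernel initiator modules, stop the target application, and flush the test interface. A condensed sketch of that sequence follows, with the rpc.py path, NQN and interface name taken from this run; nvmfpid stands for the target PID the harness recorded (76489 here), and the harness' killprocess helper performs extra sanity checks that are reduced to a plain kill.

    # Condensed sketch of the teardown traced above.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # drop the test subsystem
    sync
    modprobe -v -r nvme-tcp                                 # also pulls out nvme_fabrics/nvme_keyring
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid"                                         # stop the nvmf_tgt app (76489 in this run)
    ip -4 addr flush nvmf_init_if                           # clear the host-side test interface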
00:15:56.550 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:56.550 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:56.550 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:56.550 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:56.550 00:15:56.550 real 0m18.289s 00:15:56.550 user 0m29.044s 00:15:56.550 sys 0m2.882s 00:15:56.550 ************************************ 00:15:56.550 END TEST nvmf_ns_masking 00:15:56.550 ************************************ 00:15:56.550 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:56.550 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:56.550 17:16:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 0 -eq 1 ]] 00:15:56.550 17:16:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 0 -eq 1 ]] 00:15:56.550 17:16:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:56.550 17:16:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:56.550 17:16:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:56.550 17:16:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:56.550 ************************************ 00:15:56.550 START TEST nvmf_auth_target 00:15:56.550 ************************************ 00:15:56.550 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:56.550 * Looking for test storage... 
00:15:56.809 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:56.809 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:56.809 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:15:56.809 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:56.809 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:56.809 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:56.809 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:56.809 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:56.809 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:56.809 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:56.809 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:56.809 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:56.809 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:56.809 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:15:56.809 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:15:56.809 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:56.809 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:56.809 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:56.809 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:56.809 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:56.809 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:56.810 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:56.810 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:56.810 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.810 17:16:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.810 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.810 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:15:56.810 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.810 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:15:56.810 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:56.810 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:56.810 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:56.810 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:56.810 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:56.810 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:56.810 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:56.810 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:56.810 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:15:56.810 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:15:56.810 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 
-- # subnqn=nqn.2024-03.io.spdk:cnode0 00:15:56.810 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:15:56.810 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:15:56.810 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:15:56.810 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:15:56.810 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:15:56.810 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:56.810 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:56.810 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:56.810 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:56.810 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:56.810 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:56.810 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:56.810 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:56.810 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:56.810 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:56.810 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:56.810 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:56.810 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:56.810 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:56.810 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:56.810 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:56.810 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:56.810 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:56.810 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:56.810 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:56.810 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:56.810 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:56.810 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:56.810 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
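The NVMF_TARGET_NS_CMD array defined above is what later makes every target invocation run inside the test namespace: nvmftestinit prepends it to NVMF_APP, so the eventual launch is ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt. A short sketch of that composition; the array names are the ones from the trace and the flags match the nvmf_tgt launch that appears further down in this log.

    # Sketch of how the netns prefix array composes into the final target
    # command (names from the trace; flags as in the launch seen later on).
    NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
    NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
    NVMF_APP=(/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth)

    # Prepending the prefix makes the target run inside the namespace:
    NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
    "${NVMF_APP[@]}" &
    nvmfpid=$!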
00:15:56.810 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:56.810 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:56.810 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:56.810 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:56.810 Cannot find device "nvmf_tgt_br" 00:15:56.810 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # true 00:15:56.810 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:56.810 Cannot find device "nvmf_tgt_br2" 00:15:56.810 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # true 00:15:56.810 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:56.810 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:56.810 Cannot find device "nvmf_tgt_br" 00:15:56.810 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # true 00:15:56.810 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:56.810 Cannot find device "nvmf_tgt_br2" 00:15:56.810 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # true 00:15:56.810 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:56.810 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:56.810 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:56.810 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:56.810 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:15:56.810 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:56.810 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:56.810 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:15:56.810 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:56.810 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:56.810 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:56.810 17:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:56.810 17:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:56.810 17:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:56.810 17:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:56.810 17:16:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:57.069 17:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:57.069 17:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:57.069 17:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:57.069 17:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:57.069 17:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:57.069 17:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:57.069 17:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:57.069 17:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:57.069 17:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:57.069 17:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:57.069 17:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:57.069 17:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:57.069 17:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:57.069 17:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:57.069 17:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:57.069 17:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:57.069 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:57.069 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:15:57.069 00:15:57.069 --- 10.0.0.2 ping statistics --- 00:15:57.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:57.069 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:15:57.069 17:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:57.069 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:57.069 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:15:57.069 00:15:57.069 --- 10.0.0.3 ping statistics --- 00:15:57.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:57.069 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:15:57.069 17:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:57.069 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:57.069 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:15:57.069 00:15:57.069 --- 10.0.0.1 ping statistics --- 00:15:57.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:57.069 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:15:57.069 17:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:57.069 17:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@433 -- # return 0 00:15:57.069 17:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:57.069 17:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:57.069 17:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:57.069 17:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:57.069 17:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:57.070 17:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:57.070 17:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:57.070 17:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:15:57.070 17:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:57.070 17:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:57.070 17:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.070 17:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=77233 00:15:57.070 17:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 77233 00:15:57.070 17:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:15:57.070 17:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 77233 ']' 00:15:57.070 17:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:57.070 17:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:57.070 17:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
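Condensing the nvmf_veth_init commands traced above: three veth pairs are created, the target-side ends are moved into the nvmf_tgt_ns_spdk namespace and addressed, the host-side ends are enslaved to a bridge, and iptables rules plus a round of pings confirm 10.0.0.1 can reach 10.0.0.2 and 10.0.0.3 before the target is started. All names and addresses below are the ones from this run.

    # Condensed sketch of the veth/bridge topology built above.
    ip netns add nvmf_tgt_ns_spdk

    ip link add nvmf_init_if type veth peer name nvmf_init_br   # host side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target side, first address
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2   # target side, second address
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if                    # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    ip link add nvmf_br type bridge
    for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do
        ip link set "$l" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    ping -c 1 10.0.0.2 && ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1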
00:15:57.070 17:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:57.070 17:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.050 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:58.050 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:15:58.050 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:58.050 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:58.050 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.050 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:58.050 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=77277 00:15:58.050 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:15:58.050 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:15:58.050 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:15:58.050 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:58.050 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:58.050 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:58.050 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:15:58.050 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:15:58.050 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:58.050 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=87bf5c7be8171518ff7d32bc4d4790e8ee289ca892c9ba48 00:15:58.050 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:15:58.050 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Cuu 00:15:58.050 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 87bf5c7be8171518ff7d32bc4d4790e8ee289ca892c9ba48 0 00:15:58.050 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 87bf5c7be8171518ff7d32bc4d4790e8ee289ca892c9ba48 0 00:15:58.050 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:58.050 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:58.050 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=87bf5c7be8171518ff7d32bc4d4790e8ee289ca892c9ba48 00:15:58.050 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:15:58.050 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:58.309 17:16:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Cuu 00:15:58.309 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Cuu 00:15:58.309 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.Cuu 00:15:58.309 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:15:58.309 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:58.309 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:58.309 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:58.309 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:15:58.309 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:15:58.309 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:58.309 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=cc964ee99a3696114b8d419dc28a817db472cf3de570b2b37256b4457bc462b2 00:15:58.309 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:15:58.309 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.2Xq 00:15:58.309 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key cc964ee99a3696114b8d419dc28a817db472cf3de570b2b37256b4457bc462b2 3 00:15:58.309 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 cc964ee99a3696114b8d419dc28a817db472cf3de570b2b37256b4457bc462b2 3 00:15:58.309 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:58.309 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:58.309 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=cc964ee99a3696114b8d419dc28a817db472cf3de570b2b37256b4457bc462b2 00:15:58.309 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:15:58.309 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:58.309 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.2Xq 00:15:58.309 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.2Xq 00:15:58.309 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.2Xq 00:15:58.309 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:15:58.309 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:58.309 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:58.309 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:58.309 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:15:58.309 17:16:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:15:58.309 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:58.309 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=e9620872b63de489db6076bd3a72f954 00:15:58.309 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:15:58.310 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.QmQ 00:15:58.310 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key e9620872b63de489db6076bd3a72f954 1 00:15:58.310 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 e9620872b63de489db6076bd3a72f954 1 00:15:58.310 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:58.310 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:58.310 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=e9620872b63de489db6076bd3a72f954 00:15:58.310 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:15:58.310 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:58.310 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.QmQ 00:15:58.310 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.QmQ 00:15:58.310 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.QmQ 00:15:58.310 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:15:58.310 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:58.310 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:58.310 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:58.310 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:15:58.310 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:15:58.310 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:58.310 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=cb327a144bfdcde1d24fa65b5ba129be3ce7220e4cd8bd18 00:15:58.310 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:15:58.310 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.PHf 00:15:58.310 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key cb327a144bfdcde1d24fa65b5ba129be3ce7220e4cd8bd18 2 00:15:58.310 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 cb327a144bfdcde1d24fa65b5ba129be3ce7220e4cd8bd18 2 00:15:58.310 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:58.310 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:58.310 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=cb327a144bfdcde1d24fa65b5ba129be3ce7220e4cd8bd18 00:15:58.310 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:15:58.310 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:58.310 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.PHf 00:15:58.310 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.PHf 00:15:58.310 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.PHf 00:15:58.310 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:15:58.310 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:58.310 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:58.310 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:58.310 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:15:58.310 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:15:58.310 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:58.310 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=2199bf72c68c688a1b073fc1f32ebeddad12d0159b12f248 00:15:58.310 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:15:58.310 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.5vN 00:15:58.310 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 2199bf72c68c688a1b073fc1f32ebeddad12d0159b12f248 2 00:15:58.310 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 2199bf72c68c688a1b073fc1f32ebeddad12d0159b12f248 2 00:15:58.310 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:58.310 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:58.310 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=2199bf72c68c688a1b073fc1f32ebeddad12d0159b12f248 00:15:58.310 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:15:58.310 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:58.310 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.5vN 00:15:58.569 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.5vN 00:15:58.569 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.5vN 00:15:58.569 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:15:58.569 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:58.569 17:16:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:58.569 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:58.569 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:15:58.569 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:15:58.569 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:58.569 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=052b655e784f96a7cae55f2e0cf0d494 00:15:58.569 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:15:58.569 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Cef 00:15:58.569 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 052b655e784f96a7cae55f2e0cf0d494 1 00:15:58.569 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 052b655e784f96a7cae55f2e0cf0d494 1 00:15:58.569 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:58.569 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:58.569 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=052b655e784f96a7cae55f2e0cf0d494 00:15:58.569 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:15:58.569 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:58.569 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Cef 00:15:58.569 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Cef 00:15:58.569 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.Cef 00:15:58.569 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:15:58.569 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:15:58.569 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:58.569 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:15:58.570 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:15:58.570 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:15:58.570 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:58.570 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=9cf94f403ab1574f4e8c38db6f6b595c7afefe110c928d372898c17d8193dc1d 00:15:58.570 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:15:58.570 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.ze6 00:15:58.570 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 
9cf94f403ab1574f4e8c38db6f6b595c7afefe110c928d372898c17d8193dc1d 3 00:15:58.570 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 9cf94f403ab1574f4e8c38db6f6b595c7afefe110c928d372898c17d8193dc1d 3 00:15:58.570 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:15:58.570 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:15:58.570 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=9cf94f403ab1574f4e8c38db6f6b595c7afefe110c928d372898c17d8193dc1d 00:15:58.570 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:15:58.570 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:15:58.570 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.ze6 00:15:58.570 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.ze6 00:15:58.570 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.ze6 00:15:58.570 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:15:58.570 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 77233 00:15:58.570 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 77233 ']' 00:15:58.570 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:58.570 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:58.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:58.570 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:58.570 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:58.570 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.828 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:58.828 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:15:58.828 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 77277 /var/tmp/host.sock 00:15:58.828 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 77277 ']' 00:15:58.828 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:15:58.828 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:58.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:15:58.828 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
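As traced above, gen_dhchap_key maps the digest name to an index (null=0, sha256=1, sha384=2, sha512=3), draws len/2 random bytes as hex with xxd, and writes a 0600 temp file named after the digest; the actual DHHC-1 encoding comes from an inline python step whose body the xtrace never shows. The sketch below follows that flow but writes the raw material as a stand-in for the real encoding, then registers the file the way the next lines of the trace do, via keyring_file_add_key against both the default target socket and the /var/tmp/host.sock app.

    # Sketch of the DHCHAP key flow traced above.  The real harness feeds the
    # hex material and digest index through an inline python step to build the
    # final DHHC-1 string; that encoding is not visible in the trace, so the
    # raw pieces are written here only as a placeholder.
    gen_key_material() {
        local digest=$1 len=$2
        local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
        local key file
        key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)        # len hex characters of randomness
        file=$(mktemp -t "spdk.key-$digest.XXX")
        printf 'DHHC-1:%02d:%s\n' "${digests[$digest]}" "$key" > "$file"
        chmod 0600 "$file"
        echo "$file"
    }

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    key0=$(gen_key_material null 48)                              # e.g. /tmp/spdk.key-null.XXX
    $rpc keyring_file_add_key key0 "$key0"                        # register with the target
    $rpc -s /var/tmp/host.sock keyring_file_add_key key0 "$key0"  # and with the host app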
00:15:58.828 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:58.828 17:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.087 17:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:59.087 17:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:15:59.087 17:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:15:59.087 17:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.087 17:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.087 17:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.087 17:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:15:59.087 17:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Cuu 00:15:59.087 17:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.087 17:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.087 17:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.088 17:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.Cuu 00:15:59.088 17:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.Cuu 00:15:59.346 17:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.2Xq ]] 00:15:59.346 17:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.2Xq 00:15:59.346 17:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.346 17:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.346 17:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.346 17:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.2Xq 00:15:59.346 17:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.2Xq 00:15:59.913 17:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:15:59.913 17:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.QmQ 00:15:59.913 17:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.913 17:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.913 17:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.913 17:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.QmQ 00:15:59.913 17:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.QmQ 00:15:59.913 17:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.PHf ]] 00:15:59.913 17:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.PHf 00:15:59.913 17:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.913 17:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.172 17:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.172 17:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.PHf 00:16:00.172 17:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.PHf 00:16:00.172 17:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:00.172 17:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.5vN 00:16:00.172 17:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.172 17:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.172 17:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.172 17:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.5vN 00:16:00.172 17:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.5vN 00:16:00.430 17:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.Cef ]] 00:16:00.430 17:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Cef 00:16:00.430 17:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.430 17:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.430 17:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.430 17:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Cef 00:16:00.430 17:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Cef 00:16:00.688 17:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:00.688 17:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.ze6 00:16:00.689 17:16:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.689 17:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.689 17:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.689 17:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.ze6 00:16:00.689 17:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.ze6 00:16:00.947 17:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:16:00.947 17:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:16:00.947 17:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:00.947 17:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:00.947 17:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:00.947 17:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:01.237 17:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:16:01.237 17:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:01.237 17:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:01.237 17:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:01.237 17:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:01.237 17:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:01.237 17:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:01.237 17:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.237 17:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.237 17:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.237 17:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:01.237 17:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key 
ckey0 00:16:01.495 00:16:01.495 17:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:01.495 17:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:01.495 17:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:02.062 17:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:02.062 17:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:02.062 17:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.062 17:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.062 17:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.062 17:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:02.062 { 00:16:02.062 "auth": { 00:16:02.062 "dhgroup": "null", 00:16:02.062 "digest": "sha256", 00:16:02.062 "state": "completed" 00:16:02.062 }, 00:16:02.062 "cntlid": 1, 00:16:02.062 "listen_address": { 00:16:02.062 "adrfam": "IPv4", 00:16:02.062 "traddr": "10.0.0.2", 00:16:02.062 "trsvcid": "4420", 00:16:02.062 "trtype": "TCP" 00:16:02.062 }, 00:16:02.062 "peer_address": { 00:16:02.062 "adrfam": "IPv4", 00:16:02.062 "traddr": "10.0.0.1", 00:16:02.062 "trsvcid": "45734", 00:16:02.062 "trtype": "TCP" 00:16:02.062 }, 00:16:02.062 "qid": 0, 00:16:02.062 "state": "enabled", 00:16:02.062 "thread": "nvmf_tgt_poll_group_000" 00:16:02.062 } 00:16:02.062 ]' 00:16:02.062 17:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:02.062 17:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:02.062 17:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:02.062 17:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:02.062 17:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:02.062 17:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:02.062 17:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:02.062 17:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:02.321 17:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid 6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-secret DHHC-1:00:ODdiZjVjN2JlODE3MTUxOGZmN2QzMmJjNGQ0NzkwZThlZTI4OWNhODkyYzliYTQ467zLJw==: --dhchap-ctrl-secret DHHC-1:03:Y2M5NjRlZTk5YTM2OTYxMTRiOGQ0MTlkYzI4YTgxN2RiNDcyY2YzZGU1NzBiMmIzNzI1NmI0NDU3YmM0NjJiMryQLi0=: 00:16:07.588 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:07.588 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:16:07.588 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:16:07.588 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.588 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.588 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.588 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:07.588 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:07.588 17:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:07.588 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:16:07.588 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:07.588 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:07.588 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:07.588 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:07.588 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:07.588 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:07.588 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.588 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.588 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.588 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:07.588 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:07.588 00:16:07.588 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:07.588 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:07.588 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 
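Stripped of the xtrace noise, each connect_authenticate round above is a short RPC sequence: pin the host to one digest/DH-group pair, allow the host NQN on the subsystem with the matching key pair, attach a controller through the host's /var/tmp/host.sock RPC server, and confirm it registered as nvme0. The same calls, collected from the trace (NQNs, address, and key names copied from the log):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b

# Host side: restrict DH-HMAC-CHAP negotiation to sha256 with the null DH group.
$rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null

# Target side (default /var/tmp/spdk.sock): allow the host on cnode0 with key1/ckey1.
$rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Host side: attach an authenticated controller and check that it shows up.
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
$rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'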
00:16:07.588 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:07.588 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:07.588 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.588 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.846 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.846 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:07.846 { 00:16:07.846 "auth": { 00:16:07.846 "dhgroup": "null", 00:16:07.846 "digest": "sha256", 00:16:07.846 "state": "completed" 00:16:07.846 }, 00:16:07.846 "cntlid": 3, 00:16:07.846 "listen_address": { 00:16:07.846 "adrfam": "IPv4", 00:16:07.847 "traddr": "10.0.0.2", 00:16:07.847 "trsvcid": "4420", 00:16:07.847 "trtype": "TCP" 00:16:07.847 }, 00:16:07.847 "peer_address": { 00:16:07.847 "adrfam": "IPv4", 00:16:07.847 "traddr": "10.0.0.1", 00:16:07.847 "trsvcid": "45762", 00:16:07.847 "trtype": "TCP" 00:16:07.847 }, 00:16:07.847 "qid": 0, 00:16:07.847 "state": "enabled", 00:16:07.847 "thread": "nvmf_tgt_poll_group_000" 00:16:07.847 } 00:16:07.847 ]' 00:16:07.847 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:07.847 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:07.847 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:07.847 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:07.847 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:07.847 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:07.847 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:07.847 17:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:08.105 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid 6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-secret DHHC-1:01:ZTk2MjA4NzJiNjNkZTQ4OWRiNjA3NmJkM2E3MmY5NTRplldX: --dhchap-ctrl-secret DHHC-1:02:Y2IzMjdhMTQ0YmZkY2RlMWQyNGZhNjViNWJhMTI5YmUzY2U3MjIwZTRjZDhiZDE46kAWTg==: 00:16:09.041 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:09.041 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:09.041 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:16:09.041 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.041 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
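Verification is also the same in every round: nvmf_subsystem_get_qpairs on the target reports an "auth" object per queue pair, and the jq checks assert the digest, dhgroup, and state before the controller is detached and the host entry removed. Condensed from the trace (expected values are the ones this sha256/null pass negotiated):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Pull the negotiated auth parameters for cnode0's queue pairs from the target.
qpairs=$($rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

# Tear down before the next digest/dhgroup/key combination.
$rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0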
00:16:09.041 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.041 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:09.041 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:09.041 17:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:09.041 17:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:16:09.041 17:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:09.041 17:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:09.041 17:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:09.041 17:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:09.041 17:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:09.041 17:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:09.041 17:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.041 17:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.041 17:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.041 17:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:09.041 17:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:09.610 00:16:09.610 17:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:09.610 17:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:09.610 17:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:09.610 17:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.610 17:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:09.610 17:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.610 17:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:09.610 17:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.610 17:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:09.610 { 00:16:09.610 "auth": { 00:16:09.610 "dhgroup": "null", 00:16:09.610 "digest": "sha256", 00:16:09.610 "state": "completed" 00:16:09.610 }, 00:16:09.610 "cntlid": 5, 00:16:09.610 "listen_address": { 00:16:09.610 "adrfam": "IPv4", 00:16:09.610 "traddr": "10.0.0.2", 00:16:09.610 "trsvcid": "4420", 00:16:09.610 "trtype": "TCP" 00:16:09.610 }, 00:16:09.610 "peer_address": { 00:16:09.610 "adrfam": "IPv4", 00:16:09.610 "traddr": "10.0.0.1", 00:16:09.610 "trsvcid": "45780", 00:16:09.610 "trtype": "TCP" 00:16:09.610 }, 00:16:09.610 "qid": 0, 00:16:09.610 "state": "enabled", 00:16:09.610 "thread": "nvmf_tgt_poll_group_000" 00:16:09.610 } 00:16:09.610 ]' 00:16:09.610 17:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:09.869 17:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:09.869 17:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:09.869 17:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:09.869 17:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:09.869 17:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:09.869 17:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:09.869 17:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:10.127 17:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid 6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-secret DHHC-1:02:MjE5OWJmNzJjNjhjNjg4YTFiMDczZmMxZjMyZWJlZGRhZDEyZDAxNTliMTJmMjQ4MZCn5w==: --dhchap-ctrl-secret DHHC-1:01:MDUyYjY1NWU3ODRmOTZhN2NhZTU1ZjJlMGNmMGQ0OTR0PbC6: 00:16:10.693 17:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:10.693 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:10.693 17:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:16:10.693 17:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.693 17:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.693 17:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.693 17:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:10.693 17:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:10.693 17:16:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:10.952 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:16:10.952 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:10.952 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:10.952 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:10.952 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:10.952 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:10.952 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-key key3 00:16:10.952 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.952 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.952 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.952 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:10.952 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:11.519 00:16:11.519 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:11.519 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:11.519 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:11.778 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:11.778 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:11.778 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.778 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.778 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.778 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:11.778 { 00:16:11.778 "auth": { 00:16:11.778 "dhgroup": "null", 00:16:11.778 "digest": "sha256", 00:16:11.778 "state": "completed" 00:16:11.778 }, 00:16:11.778 "cntlid": 7, 00:16:11.778 "listen_address": { 00:16:11.778 "adrfam": "IPv4", 00:16:11.778 
"traddr": "10.0.0.2", 00:16:11.778 "trsvcid": "4420", 00:16:11.778 "trtype": "TCP" 00:16:11.778 }, 00:16:11.778 "peer_address": { 00:16:11.778 "adrfam": "IPv4", 00:16:11.778 "traddr": "10.0.0.1", 00:16:11.778 "trsvcid": "45848", 00:16:11.778 "trtype": "TCP" 00:16:11.778 }, 00:16:11.778 "qid": 0, 00:16:11.778 "state": "enabled", 00:16:11.778 "thread": "nvmf_tgt_poll_group_000" 00:16:11.778 } 00:16:11.778 ]' 00:16:11.778 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:11.778 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:11.778 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:11.778 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:11.778 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:11.778 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:11.778 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:11.778 17:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:12.038 17:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid 6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-secret DHHC-1:03:OWNmOTRmNDAzYWIxNTc0ZjRlOGMzOGRiNmY2YjU5NWM3YWZlZmUxMTBjOTI4ZDM3Mjg5OGMxN2Q4MTkzZGMxZD8648c=: 00:16:12.973 17:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:12.973 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:12.973 17:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:16:12.973 17:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.973 17:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.973 17:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.973 17:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:12.973 17:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:12.973 17:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:12.973 17:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:12.973 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:16:12.973 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:12.973 
17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:12.973 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:12.973 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:12.973 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:12.973 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:12.973 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.973 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.973 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.973 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:12.973 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:13.254 00:16:13.513 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:13.513 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:13.513 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:13.773 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:13.773 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:13.773 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.773 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.773 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.773 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:13.773 { 00:16:13.773 "auth": { 00:16:13.773 "dhgroup": "ffdhe2048", 00:16:13.773 "digest": "sha256", 00:16:13.773 "state": "completed" 00:16:13.773 }, 00:16:13.773 "cntlid": 9, 00:16:13.773 "listen_address": { 00:16:13.773 "adrfam": "IPv4", 00:16:13.773 "traddr": "10.0.0.2", 00:16:13.773 "trsvcid": "4420", 00:16:13.773 "trtype": "TCP" 00:16:13.773 }, 00:16:13.773 "peer_address": { 00:16:13.773 "adrfam": "IPv4", 00:16:13.773 "traddr": "10.0.0.1", 00:16:13.773 "trsvcid": "45876", 00:16:13.773 "trtype": "TCP" 00:16:13.773 }, 00:16:13.773 "qid": 0, 00:16:13.773 "state": "enabled", 00:16:13.773 "thread": "nvmf_tgt_poll_group_000" 00:16:13.773 } 
00:16:13.773 ]' 00:16:13.773 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:13.773 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:13.773 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:13.773 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:13.773 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:13.773 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:13.773 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:13.773 17:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:14.031 17:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid 6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-secret DHHC-1:00:ODdiZjVjN2JlODE3MTUxOGZmN2QzMmJjNGQ0NzkwZThlZTI4OWNhODkyYzliYTQ467zLJw==: --dhchap-ctrl-secret DHHC-1:03:Y2M5NjRlZTk5YTM2OTYxMTRiOGQ0MTlkYzI4YTgxN2RiNDcyY2YzZGU1NzBiMmIzNzI1NmI0NDU3YmM0NjJiMryQLi0=: 00:16:14.966 17:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:14.966 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:14.966 17:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:16:14.966 17:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.966 17:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.966 17:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.966 17:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:14.966 17:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:14.966 17:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:15.225 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:16:15.225 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:15.225 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:15.225 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:15.225 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:15.225 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:15.225 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:15.225 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.225 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.225 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.225 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:15.225 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:15.483 00:16:15.483 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:15.483 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:15.483 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:15.741 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.741 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:15.741 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.741 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.741 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.741 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:15.741 { 00:16:15.741 "auth": { 00:16:15.741 "dhgroup": "ffdhe2048", 00:16:15.741 "digest": "sha256", 00:16:15.741 "state": "completed" 00:16:15.741 }, 00:16:15.741 "cntlid": 11, 00:16:15.741 "listen_address": { 00:16:15.741 "adrfam": "IPv4", 00:16:15.741 "traddr": "10.0.0.2", 00:16:15.741 "trsvcid": "4420", 00:16:15.741 "trtype": "TCP" 00:16:15.741 }, 00:16:15.741 "peer_address": { 00:16:15.741 "adrfam": "IPv4", 00:16:15.741 "traddr": "10.0.0.1", 00:16:15.741 "trsvcid": "45902", 00:16:15.741 "trtype": "TCP" 00:16:15.741 }, 00:16:15.741 "qid": 0, 00:16:15.741 "state": "enabled", 00:16:15.741 "thread": "nvmf_tgt_poll_group_000" 00:16:15.741 } 00:16:15.741 ]' 00:16:15.741 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:15.741 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:15.741 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:15.741 17:17:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:15.741 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:15.741 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:15.741 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:15.741 17:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:16.307 17:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid 6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-secret DHHC-1:01:ZTk2MjA4NzJiNjNkZTQ4OWRiNjA3NmJkM2E3MmY5NTRplldX: --dhchap-ctrl-secret DHHC-1:02:Y2IzMjdhMTQ0YmZkY2RlMWQyNGZhNjViNWJhMTI5YmUzY2U3MjIwZTRjZDhiZDE46kAWTg==: 00:16:16.873 17:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:16.873 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:16.873 17:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:16:16.873 17:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.873 17:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.873 17:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.873 17:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:16.873 17:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:16.873 17:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:17.132 17:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:16:17.132 17:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:17.132 17:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:17.132 17:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:17.132 17:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:17.132 17:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:17.132 17:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:17.132 17:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
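Between RPC-driven attach/detach cycles, the trace also exercises the in-band path with nvme-cli: nvme connect takes the host and controller secrets directly on the command line, the kernel initiator authenticates against cnode0, and nvme disconnect tears the session down. The shape of that call as it appears in the log, with the secrets shown as placeholders for the generated DHHC-1 strings:

hostnqn=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b
hostid=6e739a91-f4c2-49d2-9c71-a65eb7d5315b

# In-band DH-HMAC-CHAP: host secret via --dhchap-secret, controller secret via --dhchap-ctrl-secret.
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "$hostnqn" --hostid "$hostid" \
    --dhchap-secret 'DHHC-1:01:<host key>' --dhchap-ctrl-secret 'DHHC-1:02:<controller key>'

nvme disconnect -n nqn.2024-03.io.spdk:cnode0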
00:16:17.132 17:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.132 17:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.132 17:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:17.132 17:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:17.390 00:16:17.390 17:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:17.390 17:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:17.390 17:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:17.959 17:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.959 17:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:17.959 17:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.959 17:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.959 17:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.959 17:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:17.959 { 00:16:17.959 "auth": { 00:16:17.959 "dhgroup": "ffdhe2048", 00:16:17.959 "digest": "sha256", 00:16:17.959 "state": "completed" 00:16:17.959 }, 00:16:17.959 "cntlid": 13, 00:16:17.959 "listen_address": { 00:16:17.959 "adrfam": "IPv4", 00:16:17.959 "traddr": "10.0.0.2", 00:16:17.959 "trsvcid": "4420", 00:16:17.959 "trtype": "TCP" 00:16:17.959 }, 00:16:17.959 "peer_address": { 00:16:17.959 "adrfam": "IPv4", 00:16:17.959 "traddr": "10.0.0.1", 00:16:17.959 "trsvcid": "45932", 00:16:17.959 "trtype": "TCP" 00:16:17.959 }, 00:16:17.959 "qid": 0, 00:16:17.959 "state": "enabled", 00:16:17.959 "thread": "nvmf_tgt_poll_group_000" 00:16:17.959 } 00:16:17.959 ]' 00:16:17.959 17:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:17.959 17:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:17.959 17:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:17.959 17:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:17.959 17:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:17.959 17:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:17.959 17:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:17.959 17:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:18.233 17:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid 6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-secret DHHC-1:02:MjE5OWJmNzJjNjhjNjg4YTFiMDczZmMxZjMyZWJlZGRhZDEyZDAxNTliMTJmMjQ4MZCn5w==: --dhchap-ctrl-secret DHHC-1:01:MDUyYjY1NWU3ODRmOTZhN2NhZTU1ZjJlMGNmMGQ0OTR0PbC6: 00:16:18.804 17:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:18.804 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:18.804 17:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:16:18.804 17:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.804 17:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.061 17:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.061 17:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:19.061 17:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:19.061 17:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:19.319 17:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:16:19.319 17:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:19.319 17:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:19.319 17:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:19.319 17:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:19.319 17:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:19.319 17:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-key key3 00:16:19.319 17:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.319 17:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.319 17:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.319 17:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:19.319 17:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:19.577 00:16:19.577 17:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:19.577 17:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:19.577 17:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:19.835 17:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.835 17:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:19.835 17:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.835 17:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.835 17:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.835 17:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:19.835 { 00:16:19.835 "auth": { 00:16:19.835 "dhgroup": "ffdhe2048", 00:16:19.835 "digest": "sha256", 00:16:19.835 "state": "completed" 00:16:19.835 }, 00:16:19.835 "cntlid": 15, 00:16:19.835 "listen_address": { 00:16:19.835 "adrfam": "IPv4", 00:16:19.835 "traddr": "10.0.0.2", 00:16:19.835 "trsvcid": "4420", 00:16:19.835 "trtype": "TCP" 00:16:19.835 }, 00:16:19.835 "peer_address": { 00:16:19.835 "adrfam": "IPv4", 00:16:19.835 "traddr": "10.0.0.1", 00:16:19.835 "trsvcid": "45964", 00:16:19.835 "trtype": "TCP" 00:16:19.835 }, 00:16:19.835 "qid": 0, 00:16:19.835 "state": "enabled", 00:16:19.835 "thread": "nvmf_tgt_poll_group_000" 00:16:19.835 } 00:16:19.835 ]' 00:16:19.835 17:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:19.835 17:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:19.835 17:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:19.835 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:19.835 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:19.835 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:19.835 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:19.836 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:20.093 17:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid 6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-secret DHHC-1:03:OWNmOTRmNDAzYWIxNTc0ZjRlOGMzOGRiNmY2YjU5NWM3YWZlZmUxMTBjOTI4ZDM3Mjg5OGMxN2Q4MTkzZGMxZD8648c=: 00:16:21.038 17:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:21.038 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:21.038 17:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:16:21.038 17:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.038 17:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.038 17:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.038 17:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:21.038 17:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:21.038 17:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:21.038 17:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:21.296 17:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:16:21.296 17:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:21.297 17:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:21.297 17:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:21.297 17:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:21.297 17:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:21.297 17:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:21.297 17:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.297 17:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.297 17:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.297 17:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:21.297 17:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:21.555 00:16:21.555 17:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:21.555 17:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:21.555 17:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:21.813 17:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.813 17:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:21.813 17:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.813 17:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.813 17:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.813 17:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:21.813 { 00:16:21.813 "auth": { 00:16:21.813 "dhgroup": "ffdhe3072", 00:16:21.813 "digest": "sha256", 00:16:21.813 "state": "completed" 00:16:21.813 }, 00:16:21.813 "cntlid": 17, 00:16:21.813 "listen_address": { 00:16:21.813 "adrfam": "IPv4", 00:16:21.813 "traddr": "10.0.0.2", 00:16:21.813 "trsvcid": "4420", 00:16:21.813 "trtype": "TCP" 00:16:21.813 }, 00:16:21.813 "peer_address": { 00:16:21.813 "adrfam": "IPv4", 00:16:21.813 "traddr": "10.0.0.1", 00:16:21.813 "trsvcid": "56710", 00:16:21.813 "trtype": "TCP" 00:16:21.813 }, 00:16:21.813 "qid": 0, 00:16:21.813 "state": "enabled", 00:16:21.813 "thread": "nvmf_tgt_poll_group_000" 00:16:21.813 } 00:16:21.813 ]' 00:16:21.813 17:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:22.071 17:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:22.072 17:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:22.072 17:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:22.072 17:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:22.072 17:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:22.072 17:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:22.072 17:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:22.330 17:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid 6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-secret DHHC-1:00:ODdiZjVjN2JlODE3MTUxOGZmN2QzMmJjNGQ0NzkwZThlZTI4OWNhODkyYzliYTQ467zLJw==: --dhchap-ctrl-secret DHHC-1:03:Y2M5NjRlZTk5YTM2OTYxMTRiOGQ0MTlkYzI4YTgxN2RiNDcyY2YzZGU1NzBiMmIzNzI1NmI0NDU3YmM0NjJiMryQLi0=: 00:16:23.265 17:17:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:23.265 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:23.265 17:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:16:23.265 17:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.265 17:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.265 17:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.265 17:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:23.265 17:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:23.265 17:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:23.265 17:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:16:23.265 17:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:23.265 17:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:23.265 17:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:23.265 17:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:23.265 17:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:23.265 17:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:23.265 17:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.265 17:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.265 17:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.265 17:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:23.265 17:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:23.832 00:16:23.832 17:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:23.832 17:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:23.832 17:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:24.121 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:24.121 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:24.121 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.121 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.121 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.121 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:24.121 { 00:16:24.121 "auth": { 00:16:24.121 "dhgroup": "ffdhe3072", 00:16:24.121 "digest": "sha256", 00:16:24.121 "state": "completed" 00:16:24.121 }, 00:16:24.121 "cntlid": 19, 00:16:24.121 "listen_address": { 00:16:24.121 "adrfam": "IPv4", 00:16:24.121 "traddr": "10.0.0.2", 00:16:24.121 "trsvcid": "4420", 00:16:24.121 "trtype": "TCP" 00:16:24.121 }, 00:16:24.121 "peer_address": { 00:16:24.121 "adrfam": "IPv4", 00:16:24.121 "traddr": "10.0.0.1", 00:16:24.121 "trsvcid": "56732", 00:16:24.121 "trtype": "TCP" 00:16:24.121 }, 00:16:24.121 "qid": 0, 00:16:24.121 "state": "enabled", 00:16:24.121 "thread": "nvmf_tgt_poll_group_000" 00:16:24.121 } 00:16:24.121 ]' 00:16:24.121 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:24.121 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:24.121 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:24.121 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:24.121 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:24.121 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:24.121 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:24.121 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:24.384 17:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid 6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-secret DHHC-1:01:ZTk2MjA4NzJiNjNkZTQ4OWRiNjA3NmJkM2E3MmY5NTRplldX: --dhchap-ctrl-secret DHHC-1:02:Y2IzMjdhMTQ0YmZkY2RlMWQyNGZhNjViNWJhMTI5YmUzY2U3MjIwZTRjZDhiZDE46kAWTg==: 00:16:25.317 17:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:25.317 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:25.317 17:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:16:25.318 
17:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.318 17:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.318 17:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.318 17:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:25.318 17:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:25.318 17:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:25.318 17:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:16:25.318 17:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:25.318 17:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:25.318 17:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:25.318 17:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:25.318 17:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:25.318 17:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:25.318 17:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.318 17:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.318 17:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.318 17:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:25.318 17:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:25.575 00:16:25.833 17:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:25.833 17:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:25.834 17:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.092 17:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.092 17:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:16:26.092 17:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.092 17:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.092 17:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.092 17:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:26.092 { 00:16:26.092 "auth": { 00:16:26.092 "dhgroup": "ffdhe3072", 00:16:26.092 "digest": "sha256", 00:16:26.092 "state": "completed" 00:16:26.092 }, 00:16:26.092 "cntlid": 21, 00:16:26.092 "listen_address": { 00:16:26.092 "adrfam": "IPv4", 00:16:26.092 "traddr": "10.0.0.2", 00:16:26.092 "trsvcid": "4420", 00:16:26.092 "trtype": "TCP" 00:16:26.092 }, 00:16:26.092 "peer_address": { 00:16:26.092 "adrfam": "IPv4", 00:16:26.092 "traddr": "10.0.0.1", 00:16:26.092 "trsvcid": "56756", 00:16:26.092 "trtype": "TCP" 00:16:26.092 }, 00:16:26.092 "qid": 0, 00:16:26.092 "state": "enabled", 00:16:26.092 "thread": "nvmf_tgt_poll_group_000" 00:16:26.092 } 00:16:26.092 ]' 00:16:26.092 17:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:26.092 17:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:26.092 17:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:26.092 17:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:26.092 17:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:26.092 17:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:26.092 17:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:26.092 17:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:26.351 17:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid 6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-secret DHHC-1:02:MjE5OWJmNzJjNjhjNjg4YTFiMDczZmMxZjMyZWJlZGRhZDEyZDAxNTliMTJmMjQ4MZCn5w==: --dhchap-ctrl-secret DHHC-1:01:MDUyYjY1NWU3ODRmOTZhN2NhZTU1ZjJlMGNmMGQ0OTR0PbC6: 00:16:27.286 17:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:27.286 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:27.286 17:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:16:27.286 17:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.286 17:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.286 17:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.286 17:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:16:27.286 17:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:27.286 17:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:27.286 17:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:16:27.286 17:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:27.286 17:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:27.286 17:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:27.286 17:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:27.286 17:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:27.287 17:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-key key3 00:16:27.287 17:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.287 17:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.287 17:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.287 17:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:27.287 17:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:27.544 00:16:27.544 17:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:27.544 17:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:27.544 17:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:28.110 17:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.110 17:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.110 17:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.110 17:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.110 17:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.110 17:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:28.110 { 00:16:28.110 "auth": { 
00:16:28.110 "dhgroup": "ffdhe3072", 00:16:28.110 "digest": "sha256", 00:16:28.110 "state": "completed" 00:16:28.110 }, 00:16:28.110 "cntlid": 23, 00:16:28.110 "listen_address": { 00:16:28.110 "adrfam": "IPv4", 00:16:28.110 "traddr": "10.0.0.2", 00:16:28.110 "trsvcid": "4420", 00:16:28.110 "trtype": "TCP" 00:16:28.110 }, 00:16:28.110 "peer_address": { 00:16:28.110 "adrfam": "IPv4", 00:16:28.110 "traddr": "10.0.0.1", 00:16:28.110 "trsvcid": "56782", 00:16:28.110 "trtype": "TCP" 00:16:28.110 }, 00:16:28.110 "qid": 0, 00:16:28.110 "state": "enabled", 00:16:28.110 "thread": "nvmf_tgt_poll_group_000" 00:16:28.110 } 00:16:28.110 ]' 00:16:28.110 17:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:28.110 17:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:28.110 17:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:28.110 17:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:28.110 17:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:28.110 17:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:28.110 17:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.111 17:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:28.368 17:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid 6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-secret DHHC-1:03:OWNmOTRmNDAzYWIxNTc0ZjRlOGMzOGRiNmY2YjU5NWM3YWZlZmUxMTBjOTI4ZDM3Mjg5OGMxN2Q4MTkzZGMxZD8648c=: 00:16:28.934 17:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:28.934 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:28.934 17:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:16:28.934 17:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.934 17:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.934 17:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.934 17:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:28.934 17:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:28.934 17:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:28.934 17:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:29.192 17:17:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:16:29.192 17:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:29.192 17:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:29.192 17:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:29.192 17:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:29.193 17:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.193 17:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.193 17:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.193 17:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.450 17:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.451 17:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.451 17:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.708 00:16:29.708 17:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:29.708 17:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:29.708 17:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:29.967 17:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:29.967 17:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:29.967 17:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.967 17:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.967 17:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.967 17:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:29.967 { 00:16:29.967 "auth": { 00:16:29.967 "dhgroup": "ffdhe4096", 00:16:29.967 "digest": "sha256", 00:16:29.967 "state": "completed" 00:16:29.967 }, 00:16:29.967 "cntlid": 25, 00:16:29.967 "listen_address": { 00:16:29.967 "adrfam": "IPv4", 00:16:29.967 "traddr": "10.0.0.2", 00:16:29.967 "trsvcid": "4420", 00:16:29.967 "trtype": "TCP" 00:16:29.967 }, 00:16:29.967 "peer_address": { 00:16:29.967 
"adrfam": "IPv4", 00:16:29.967 "traddr": "10.0.0.1", 00:16:29.967 "trsvcid": "56798", 00:16:29.967 "trtype": "TCP" 00:16:29.967 }, 00:16:29.967 "qid": 0, 00:16:29.967 "state": "enabled", 00:16:29.967 "thread": "nvmf_tgt_poll_group_000" 00:16:29.967 } 00:16:29.967 ]' 00:16:29.967 17:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:29.967 17:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:29.967 17:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:29.967 17:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:29.967 17:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:30.225 17:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.225 17:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.225 17:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.483 17:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid 6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-secret DHHC-1:00:ODdiZjVjN2JlODE3MTUxOGZmN2QzMmJjNGQ0NzkwZThlZTI4OWNhODkyYzliYTQ467zLJw==: --dhchap-ctrl-secret DHHC-1:03:Y2M5NjRlZTk5YTM2OTYxMTRiOGQ0MTlkYzI4YTgxN2RiNDcyY2YzZGU1NzBiMmIzNzI1NmI0NDU3YmM0NjJiMryQLi0=: 00:16:31.049 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:31.049 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:31.049 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:16:31.049 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.049 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.049 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.049 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:31.049 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:31.049 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:31.326 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:16:31.326 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:31.326 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:31.326 17:17:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:31.326 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:31.326 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:31.326 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:31.326 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.326 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.326 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.326 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:31.326 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:31.604 00:16:31.862 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:31.862 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.862 17:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:32.121 17:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:32.121 17:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:32.121 17:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.121 17:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.122 17:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.122 17:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:32.122 { 00:16:32.122 "auth": { 00:16:32.122 "dhgroup": "ffdhe4096", 00:16:32.122 "digest": "sha256", 00:16:32.122 "state": "completed" 00:16:32.122 }, 00:16:32.122 "cntlid": 27, 00:16:32.122 "listen_address": { 00:16:32.122 "adrfam": "IPv4", 00:16:32.122 "traddr": "10.0.0.2", 00:16:32.122 "trsvcid": "4420", 00:16:32.122 "trtype": "TCP" 00:16:32.122 }, 00:16:32.122 "peer_address": { 00:16:32.122 "adrfam": "IPv4", 00:16:32.122 "traddr": "10.0.0.1", 00:16:32.122 "trsvcid": "52284", 00:16:32.122 "trtype": "TCP" 00:16:32.122 }, 00:16:32.122 "qid": 0, 00:16:32.122 "state": "enabled", 00:16:32.122 "thread": "nvmf_tgt_poll_group_000" 00:16:32.122 } 00:16:32.122 ]' 00:16:32.122 17:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:16:32.122 17:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:32.122 17:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:32.122 17:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:32.122 17:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:32.122 17:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:32.122 17:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:32.122 17:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:32.380 17:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid 6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-secret DHHC-1:01:ZTk2MjA4NzJiNjNkZTQ4OWRiNjA3NmJkM2E3MmY5NTRplldX: --dhchap-ctrl-secret DHHC-1:02:Y2IzMjdhMTQ0YmZkY2RlMWQyNGZhNjViNWJhMTI5YmUzY2U3MjIwZTRjZDhiZDE46kAWTg==: 00:16:33.316 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:33.316 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:33.316 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:16:33.316 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.316 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.316 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.316 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:33.316 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:33.316 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:33.316 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:16:33.316 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:33.316 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:33.316 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:33.316 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:33.316 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:33.316 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:33.316 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.316 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.316 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.316 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:33.316 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:33.882 00:16:33.882 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:33.882 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:33.882 17:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.141 17:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.141 17:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:34.141 17:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.141 17:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.141 17:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.141 17:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:34.141 { 00:16:34.141 "auth": { 00:16:34.141 "dhgroup": "ffdhe4096", 00:16:34.141 "digest": "sha256", 00:16:34.141 "state": "completed" 00:16:34.141 }, 00:16:34.141 "cntlid": 29, 00:16:34.141 "listen_address": { 00:16:34.141 "adrfam": "IPv4", 00:16:34.141 "traddr": "10.0.0.2", 00:16:34.141 "trsvcid": "4420", 00:16:34.141 "trtype": "TCP" 00:16:34.141 }, 00:16:34.141 "peer_address": { 00:16:34.141 "adrfam": "IPv4", 00:16:34.141 "traddr": "10.0.0.1", 00:16:34.141 "trsvcid": "52300", 00:16:34.141 "trtype": "TCP" 00:16:34.141 }, 00:16:34.141 "qid": 0, 00:16:34.141 "state": "enabled", 00:16:34.141 "thread": "nvmf_tgt_poll_group_000" 00:16:34.141 } 00:16:34.141 ]' 00:16:34.141 17:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:34.141 17:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:34.141 17:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:34.141 17:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:34.141 17:17:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:34.141 17:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:34.141 17:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:34.141 17:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:34.400 17:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid 6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-secret DHHC-1:02:MjE5OWJmNzJjNjhjNjg4YTFiMDczZmMxZjMyZWJlZGRhZDEyZDAxNTliMTJmMjQ4MZCn5w==: --dhchap-ctrl-secret DHHC-1:01:MDUyYjY1NWU3ODRmOTZhN2NhZTU1ZjJlMGNmMGQ0OTR0PbC6: 00:16:35.336 17:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:35.336 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:35.336 17:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:16:35.336 17:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.336 17:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.336 17:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.336 17:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:35.336 17:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:35.336 17:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:35.594 17:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:16:35.594 17:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:35.594 17:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:35.594 17:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:35.594 17:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:35.594 17:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:35.594 17:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-key key3 00:16:35.594 17:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.594 17:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.594 17:17:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.594 17:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:35.594 17:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:35.854 00:16:35.854 17:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:35.854 17:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:35.854 17:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.421 17:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.421 17:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.421 17:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.421 17:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.421 17:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.421 17:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:36.421 { 00:16:36.421 "auth": { 00:16:36.421 "dhgroup": "ffdhe4096", 00:16:36.421 "digest": "sha256", 00:16:36.421 "state": "completed" 00:16:36.421 }, 00:16:36.421 "cntlid": 31, 00:16:36.421 "listen_address": { 00:16:36.421 "adrfam": "IPv4", 00:16:36.421 "traddr": "10.0.0.2", 00:16:36.421 "trsvcid": "4420", 00:16:36.421 "trtype": "TCP" 00:16:36.421 }, 00:16:36.421 "peer_address": { 00:16:36.421 "adrfam": "IPv4", 00:16:36.421 "traddr": "10.0.0.1", 00:16:36.421 "trsvcid": "52322", 00:16:36.421 "trtype": "TCP" 00:16:36.421 }, 00:16:36.421 "qid": 0, 00:16:36.421 "state": "enabled", 00:16:36.421 "thread": "nvmf_tgt_poll_group_000" 00:16:36.421 } 00:16:36.421 ]' 00:16:36.421 17:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:36.421 17:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:36.421 17:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:36.421 17:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:36.421 17:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:36.421 17:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.421 17:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.421 17:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:36.679 17:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid 6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-secret DHHC-1:03:OWNmOTRmNDAzYWIxNTc0ZjRlOGMzOGRiNmY2YjU5NWM3YWZlZmUxMTBjOTI4ZDM3Mjg5OGMxN2Q4MTkzZGMxZD8648c=: 00:16:37.260 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:37.523 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:37.523 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:16:37.523 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.523 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.523 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.523 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:37.523 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:37.523 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:37.523 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:37.781 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:16:37.781 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:37.781 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:37.781 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:37.781 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:37.781 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:37.781 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:37.781 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.781 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.781 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.781 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:16:37.781 17:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:38.348 00:16:38.348 17:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:38.348 17:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:38.348 17:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.607 17:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.607 17:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.607 17:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.607 17:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.607 17:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.607 17:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:38.607 { 00:16:38.607 "auth": { 00:16:38.607 "dhgroup": "ffdhe6144", 00:16:38.607 "digest": "sha256", 00:16:38.607 "state": "completed" 00:16:38.607 }, 00:16:38.607 "cntlid": 33, 00:16:38.607 "listen_address": { 00:16:38.607 "adrfam": "IPv4", 00:16:38.607 "traddr": "10.0.0.2", 00:16:38.607 "trsvcid": "4420", 00:16:38.607 "trtype": "TCP" 00:16:38.607 }, 00:16:38.607 "peer_address": { 00:16:38.607 "adrfam": "IPv4", 00:16:38.607 "traddr": "10.0.0.1", 00:16:38.607 "trsvcid": "52364", 00:16:38.607 "trtype": "TCP" 00:16:38.607 }, 00:16:38.607 "qid": 0, 00:16:38.607 "state": "enabled", 00:16:38.607 "thread": "nvmf_tgt_poll_group_000" 00:16:38.607 } 00:16:38.607 ]' 00:16:38.607 17:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:38.607 17:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:38.607 17:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:38.607 17:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:38.607 17:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:38.607 17:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:38.607 17:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.607 17:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:38.865 17:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid 
6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-secret DHHC-1:00:ODdiZjVjN2JlODE3MTUxOGZmN2QzMmJjNGQ0NzkwZThlZTI4OWNhODkyYzliYTQ467zLJw==: --dhchap-ctrl-secret DHHC-1:03:Y2M5NjRlZTk5YTM2OTYxMTRiOGQ0MTlkYzI4YTgxN2RiNDcyY2YzZGU1NzBiMmIzNzI1NmI0NDU3YmM0NjJiMryQLi0=: 00:16:39.431 17:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:39.431 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:39.689 17:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:16:39.689 17:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.689 17:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.689 17:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.689 17:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:39.689 17:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:39.690 17:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:39.948 17:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:16:39.948 17:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:39.948 17:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:39.948 17:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:39.948 17:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:39.948 17:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:39.948 17:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:39.948 17:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.948 17:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.948 17:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.948 17:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:39.948 17:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:40.207 00:16:40.207 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:40.207 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.207 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:40.466 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.466 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:40.466 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.466 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.466 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.466 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:40.466 { 00:16:40.466 "auth": { 00:16:40.466 "dhgroup": "ffdhe6144", 00:16:40.466 "digest": "sha256", 00:16:40.466 "state": "completed" 00:16:40.466 }, 00:16:40.466 "cntlid": 35, 00:16:40.466 "listen_address": { 00:16:40.466 "adrfam": "IPv4", 00:16:40.466 "traddr": "10.0.0.2", 00:16:40.466 "trsvcid": "4420", 00:16:40.466 "trtype": "TCP" 00:16:40.466 }, 00:16:40.466 "peer_address": { 00:16:40.466 "adrfam": "IPv4", 00:16:40.466 "traddr": "10.0.0.1", 00:16:40.466 "trsvcid": "51750", 00:16:40.466 "trtype": "TCP" 00:16:40.466 }, 00:16:40.466 "qid": 0, 00:16:40.466 "state": "enabled", 00:16:40.466 "thread": "nvmf_tgt_poll_group_000" 00:16:40.466 } 00:16:40.466 ]' 00:16:40.466 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:40.724 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:40.724 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:40.724 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:40.724 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:40.724 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:40.725 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:40.725 17:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:40.983 17:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid 6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-secret DHHC-1:01:ZTk2MjA4NzJiNjNkZTQ4OWRiNjA3NmJkM2E3MmY5NTRplldX: --dhchap-ctrl-secret DHHC-1:02:Y2IzMjdhMTQ0YmZkY2RlMWQyNGZhNjViNWJhMTI5YmUzY2U3MjIwZTRjZDhiZDE46kAWTg==: 00:16:41.550 17:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.550 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.550 17:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:16:41.550 17:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.550 17:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.550 17:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.550 17:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:41.550 17:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:41.550 17:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:41.809 17:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:16:41.809 17:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:41.809 17:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:41.809 17:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:41.809 17:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:41.809 17:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:41.809 17:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:41.809 17:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.809 17:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.809 17:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.809 17:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:41.809 17:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:42.376 00:16:42.376 17:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:42.376 17:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.376 17:17:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:42.635 17:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.635 17:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:42.635 17:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.635 17:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.635 17:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.635 17:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:42.635 { 00:16:42.635 "auth": { 00:16:42.635 "dhgroup": "ffdhe6144", 00:16:42.635 "digest": "sha256", 00:16:42.635 "state": "completed" 00:16:42.635 }, 00:16:42.635 "cntlid": 37, 00:16:42.635 "listen_address": { 00:16:42.635 "adrfam": "IPv4", 00:16:42.635 "traddr": "10.0.0.2", 00:16:42.635 "trsvcid": "4420", 00:16:42.635 "trtype": "TCP" 00:16:42.635 }, 00:16:42.635 "peer_address": { 00:16:42.635 "adrfam": "IPv4", 00:16:42.635 "traddr": "10.0.0.1", 00:16:42.635 "trsvcid": "51780", 00:16:42.635 "trtype": "TCP" 00:16:42.635 }, 00:16:42.635 "qid": 0, 00:16:42.635 "state": "enabled", 00:16:42.635 "thread": "nvmf_tgt_poll_group_000" 00:16:42.635 } 00:16:42.635 ]' 00:16:42.635 17:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:42.635 17:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:42.635 17:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:42.894 17:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:42.894 17:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:42.894 17:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.894 17:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.894 17:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:43.152 17:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid 6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-secret DHHC-1:02:MjE5OWJmNzJjNjhjNjg4YTFiMDczZmMxZjMyZWJlZGRhZDEyZDAxNTliMTJmMjQ4MZCn5w==: --dhchap-ctrl-secret DHHC-1:01:MDUyYjY1NWU3ODRmOTZhN2NhZTU1ZjJlMGNmMGQ0OTR0PbC6: 00:16:43.719 17:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.719 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.719 17:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:16:43.719 17:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
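The trace above repeats one connect/authenticate cycle per DH-HMAC-CHAP key: the target registers the host NQN with a --dhchap-key (and, when defined, a --dhchap-ctrlr-key), the host-side bdev layer is pinned to a single digest/dhgroup pair via bdev_nvme_set_options, the controller is attached, and the resulting qpair is checked for auth.state == "completed" before being detached again. The sketch below condenses that cycle for illustration only; it is not the actual target/auth.sh helper. The socket path, addresses, NQNs and key names are the placeholders taken from the log, key0/ckey0 are assumed to be keyring entries registered earlier in the run, and the target is assumed to answer on the default RPC socket while the host application listens on /var/tmp/host.sock.

#!/usr/bin/env bash
# Sketch of one SPDK DH-HMAC-CHAP authentication cycle (placeholders assumed from the log above).
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
HOST_SOCK=/var/tmp/host.sock                      # host-side SPDK application socket
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b
digest=sha256 dhgroup=ffdhe6144 keyid=0           # one cell of the digest x dhgroup x key matrix

# Target side (default RPC socket): allow this host, bound to key<N>/ckey<N> from the keyring.
"$RPC" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# Host side: restrict the negotiable digest and DH group, then attach with the matching keys.
"$RPC" -s "$HOST_SOCK" bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
"$RPC" -s "$HOST_SOCK" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# Verify the controller came up and the qpair finished authenticating.
[[ "$("$RPC" -s "$HOST_SOCK" bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]
qpairs=$("$RPC" nvmf_subsystem_get_qpairs "$SUBNQN")
[[ "$(jq -r '.[0].auth.digest'  <<< "$qpairs")" == "$digest"  ]]
[[ "$(jq -r '.[0].auth.dhgroup' <<< "$qpairs")" == "$dhgroup" ]]
[[ "$(jq -r '.[0].auth.state'   <<< "$qpairs")" == completed  ]]

# Tear down before the next digest/dhgroup/key combination.
"$RPC" -s "$HOST_SOCK" bdev_nvme_detach_controller nvme0
"$RPC" nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"

The same key material is also exercised through the kernel initiator (nvme connect ... --dhchap-secret/--dhchap-ctrl-secret followed by nvme disconnect), which is what produces the "disconnected 1 controller(s)" lines in the trace, and the outer loops sweep the digest (sha256, sha384) and dhgroup (null, ffdhe6144, ffdhe8192) combinations seen throughout this section.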
00:16:43.719 17:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.719 17:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.719 17:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:43.719 17:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:43.719 17:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:43.977 17:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:16:43.977 17:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:43.977 17:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:43.977 17:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:43.977 17:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:43.977 17:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.978 17:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-key key3 00:16:43.978 17:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.978 17:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.978 17:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.978 17:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:43.978 17:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:44.544 00:16:44.544 17:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:44.544 17:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:44.544 17:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:44.878 17:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.878 17:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:44.878 17:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.878 17:17:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.878 17:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.878 17:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:44.878 { 00:16:44.878 "auth": { 00:16:44.878 "dhgroup": "ffdhe6144", 00:16:44.878 "digest": "sha256", 00:16:44.878 "state": "completed" 00:16:44.878 }, 00:16:44.878 "cntlid": 39, 00:16:44.878 "listen_address": { 00:16:44.878 "adrfam": "IPv4", 00:16:44.878 "traddr": "10.0.0.2", 00:16:44.878 "trsvcid": "4420", 00:16:44.878 "trtype": "TCP" 00:16:44.878 }, 00:16:44.878 "peer_address": { 00:16:44.878 "adrfam": "IPv4", 00:16:44.878 "traddr": "10.0.0.1", 00:16:44.878 "trsvcid": "51804", 00:16:44.878 "trtype": "TCP" 00:16:44.878 }, 00:16:44.878 "qid": 0, 00:16:44.878 "state": "enabled", 00:16:44.878 "thread": "nvmf_tgt_poll_group_000" 00:16:44.878 } 00:16:44.878 ]' 00:16:44.878 17:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:44.878 17:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:44.878 17:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:44.878 17:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:44.878 17:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:44.878 17:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:44.878 17:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:44.878 17:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:45.136 17:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid 6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-secret DHHC-1:03:OWNmOTRmNDAzYWIxNTc0ZjRlOGMzOGRiNmY2YjU5NWM3YWZlZmUxMTBjOTI4ZDM3Mjg5OGMxN2Q4MTkzZGMxZD8648c=: 00:16:46.071 17:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:46.071 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:46.071 17:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:16:46.071 17:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.072 17:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.072 17:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.072 17:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:46.072 17:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:46.072 17:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:46.072 17:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:46.330 17:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:16:46.330 17:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:46.330 17:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:46.330 17:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:46.330 17:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:46.330 17:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.330 17:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.330 17:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.330 17:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.330 17:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.330 17:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.330 17:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.896 00:16:46.896 17:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:46.896 17:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:46.897 17:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.156 17:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.156 17:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.156 17:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.156 17:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.156 17:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.156 17:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:47.156 { 00:16:47.156 "auth": { 00:16:47.156 "dhgroup": 
"ffdhe8192", 00:16:47.156 "digest": "sha256", 00:16:47.156 "state": "completed" 00:16:47.156 }, 00:16:47.156 "cntlid": 41, 00:16:47.156 "listen_address": { 00:16:47.156 "adrfam": "IPv4", 00:16:47.156 "traddr": "10.0.0.2", 00:16:47.156 "trsvcid": "4420", 00:16:47.156 "trtype": "TCP" 00:16:47.156 }, 00:16:47.156 "peer_address": { 00:16:47.156 "adrfam": "IPv4", 00:16:47.156 "traddr": "10.0.0.1", 00:16:47.156 "trsvcid": "51834", 00:16:47.156 "trtype": "TCP" 00:16:47.156 }, 00:16:47.156 "qid": 0, 00:16:47.156 "state": "enabled", 00:16:47.156 "thread": "nvmf_tgt_poll_group_000" 00:16:47.156 } 00:16:47.156 ]' 00:16:47.156 17:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:47.156 17:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:47.156 17:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:47.156 17:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:47.156 17:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:47.156 17:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.156 17:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.156 17:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:47.720 17:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid 6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-secret DHHC-1:00:ODdiZjVjN2JlODE3MTUxOGZmN2QzMmJjNGQ0NzkwZThlZTI4OWNhODkyYzliYTQ467zLJw==: --dhchap-ctrl-secret DHHC-1:03:Y2M5NjRlZTk5YTM2OTYxMTRiOGQ0MTlkYzI4YTgxN2RiNDcyY2YzZGU1NzBiMmIzNzI1NmI0NDU3YmM0NjJiMryQLi0=: 00:16:48.287 17:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.287 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.287 17:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:16:48.287 17:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.287 17:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.287 17:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.287 17:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:48.287 17:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:48.287 17:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:48.545 17:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:16:48.545 17:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:48.545 17:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:48.545 17:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:48.545 17:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:48.545 17:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:48.545 17:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:48.545 17:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.545 17:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.545 17:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.545 17:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:48.545 17:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.113 00:16:49.114 17:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:49.114 17:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.114 17:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:49.372 17:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.372 17:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.372 17:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.372 17:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.372 17:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.372 17:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:49.372 { 00:16:49.372 "auth": { 00:16:49.372 "dhgroup": "ffdhe8192", 00:16:49.372 "digest": "sha256", 00:16:49.372 "state": "completed" 00:16:49.372 }, 00:16:49.372 "cntlid": 43, 00:16:49.372 "listen_address": { 00:16:49.372 "adrfam": "IPv4", 00:16:49.372 "traddr": "10.0.0.2", 00:16:49.372 "trsvcid": "4420", 00:16:49.372 "trtype": "TCP" 00:16:49.372 }, 00:16:49.372 "peer_address": { 00:16:49.372 "adrfam": "IPv4", 00:16:49.372 "traddr": 
"10.0.0.1", 00:16:49.372 "trsvcid": "51874", 00:16:49.372 "trtype": "TCP" 00:16:49.372 }, 00:16:49.372 "qid": 0, 00:16:49.372 "state": "enabled", 00:16:49.372 "thread": "nvmf_tgt_poll_group_000" 00:16:49.372 } 00:16:49.372 ]' 00:16:49.372 17:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:49.372 17:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:49.372 17:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:49.630 17:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:49.630 17:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:49.630 17:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.630 17:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.630 17:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.889 17:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid 6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-secret DHHC-1:01:ZTk2MjA4NzJiNjNkZTQ4OWRiNjA3NmJkM2E3MmY5NTRplldX: --dhchap-ctrl-secret DHHC-1:02:Y2IzMjdhMTQ0YmZkY2RlMWQyNGZhNjViNWJhMTI5YmUzY2U3MjIwZTRjZDhiZDE46kAWTg==: 00:16:50.455 17:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.455 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.455 17:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:16:50.455 17:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.455 17:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.713 17:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.713 17:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:50.713 17:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:50.713 17:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:50.971 17:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:16:50.971 17:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:50.971 17:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:50.971 17:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:50.971 17:17:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:50.971 17:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.971 17:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:50.971 17:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.971 17:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.971 17:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.971 17:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:50.971 17:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:51.539 00:16:51.539 17:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:51.539 17:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:51.539 17:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.797 17:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.797 17:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.797 17:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.797 17:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.797 17:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.797 17:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:51.797 { 00:16:51.797 "auth": { 00:16:51.797 "dhgroup": "ffdhe8192", 00:16:51.797 "digest": "sha256", 00:16:51.797 "state": "completed" 00:16:51.797 }, 00:16:51.797 "cntlid": 45, 00:16:51.797 "listen_address": { 00:16:51.797 "adrfam": "IPv4", 00:16:51.797 "traddr": "10.0.0.2", 00:16:51.797 "trsvcid": "4420", 00:16:51.797 "trtype": "TCP" 00:16:51.797 }, 00:16:51.797 "peer_address": { 00:16:51.797 "adrfam": "IPv4", 00:16:51.797 "traddr": "10.0.0.1", 00:16:51.797 "trsvcid": "35590", 00:16:51.797 "trtype": "TCP" 00:16:51.797 }, 00:16:51.797 "qid": 0, 00:16:51.797 "state": "enabled", 00:16:51.797 "thread": "nvmf_tgt_poll_group_000" 00:16:51.797 } 00:16:51.797 ]' 00:16:51.797 17:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:51.797 17:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # 
[[ sha256 == \s\h\a\2\5\6 ]] 00:16:51.797 17:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:51.797 17:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:51.797 17:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:52.056 17:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:52.056 17:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.056 17:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:52.314 17:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid 6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-secret DHHC-1:02:MjE5OWJmNzJjNjhjNjg4YTFiMDczZmMxZjMyZWJlZGRhZDEyZDAxNTliMTJmMjQ4MZCn5w==: --dhchap-ctrl-secret DHHC-1:01:MDUyYjY1NWU3ODRmOTZhN2NhZTU1ZjJlMGNmMGQ0OTR0PbC6: 00:16:52.881 17:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:52.881 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:52.881 17:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:16:52.881 17:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.881 17:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.881 17:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.881 17:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:52.881 17:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:52.881 17:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:53.140 17:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:16:53.140 17:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:53.140 17:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:53.140 17:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:53.140 17:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:53.140 17:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:53.140 17:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 
--dhchap-key key3 00:16:53.140 17:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.140 17:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.140 17:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.140 17:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:53.140 17:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:53.707 00:16:53.965 17:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:53.965 17:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:53.965 17:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:54.223 17:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.223 17:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:54.223 17:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.223 17:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.223 17:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.223 17:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:54.223 { 00:16:54.223 "auth": { 00:16:54.223 "dhgroup": "ffdhe8192", 00:16:54.223 "digest": "sha256", 00:16:54.223 "state": "completed" 00:16:54.223 }, 00:16:54.223 "cntlid": 47, 00:16:54.223 "listen_address": { 00:16:54.223 "adrfam": "IPv4", 00:16:54.224 "traddr": "10.0.0.2", 00:16:54.224 "trsvcid": "4420", 00:16:54.224 "trtype": "TCP" 00:16:54.224 }, 00:16:54.224 "peer_address": { 00:16:54.224 "adrfam": "IPv4", 00:16:54.224 "traddr": "10.0.0.1", 00:16:54.224 "trsvcid": "35612", 00:16:54.224 "trtype": "TCP" 00:16:54.224 }, 00:16:54.224 "qid": 0, 00:16:54.224 "state": "enabled", 00:16:54.224 "thread": "nvmf_tgt_poll_group_000" 00:16:54.224 } 00:16:54.224 ]' 00:16:54.224 17:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:54.224 17:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:54.224 17:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:54.224 17:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:54.224 17:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:54.224 17:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:16:54.224 17:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.224 17:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.482 17:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid 6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-secret DHHC-1:03:OWNmOTRmNDAzYWIxNTc0ZjRlOGMzOGRiNmY2YjU5NWM3YWZlZmUxMTBjOTI4ZDM3Mjg5OGMxN2Q4MTkzZGMxZD8648c=: 00:16:55.418 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.418 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:55.418 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:16:55.418 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.418 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.418 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.418 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:16:55.418 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:55.418 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:55.418 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:55.418 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:55.676 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:16:55.676 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:55.676 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:55.676 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:55.676 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:55.677 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:55.677 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.677 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.677 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.677 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.677 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.677 17:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.935 00:16:55.935 17:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:55.935 17:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:55.935 17:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.193 17:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.193 17:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:56.194 17:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.194 17:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.194 17:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.194 17:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:56.194 { 00:16:56.194 "auth": { 00:16:56.194 "dhgroup": "null", 00:16:56.194 "digest": "sha384", 00:16:56.194 "state": "completed" 00:16:56.194 }, 00:16:56.194 "cntlid": 49, 00:16:56.194 "listen_address": { 00:16:56.194 "adrfam": "IPv4", 00:16:56.194 "traddr": "10.0.0.2", 00:16:56.194 "trsvcid": "4420", 00:16:56.194 "trtype": "TCP" 00:16:56.194 }, 00:16:56.194 "peer_address": { 00:16:56.194 "adrfam": "IPv4", 00:16:56.194 "traddr": "10.0.0.1", 00:16:56.194 "trsvcid": "35650", 00:16:56.194 "trtype": "TCP" 00:16:56.194 }, 00:16:56.194 "qid": 0, 00:16:56.194 "state": "enabled", 00:16:56.194 "thread": "nvmf_tgt_poll_group_000" 00:16:56.194 } 00:16:56.194 ]' 00:16:56.194 17:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:56.452 17:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:56.452 17:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:56.452 17:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:56.452 17:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:56.452 17:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:56.452 17:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:56.452 17:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:56.710 17:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid 6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-secret DHHC-1:00:ODdiZjVjN2JlODE3MTUxOGZmN2QzMmJjNGQ0NzkwZThlZTI4OWNhODkyYzliYTQ467zLJw==: --dhchap-ctrl-secret DHHC-1:03:Y2M5NjRlZTk5YTM2OTYxMTRiOGQ0MTlkYzI4YTgxN2RiNDcyY2YzZGU1NzBiMmIzNzI1NmI0NDU3YmM0NjJiMryQLi0=: 00:16:57.274 17:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:57.274 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:57.274 17:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:16:57.274 17:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.274 17:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.274 17:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.274 17:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:57.274 17:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:57.274 17:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:57.531 17:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:16:57.531 17:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:57.531 17:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:57.531 17:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:57.531 17:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:57.531 17:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:57.531 17:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.531 17:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.531 17:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.531 17:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.531 17:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.532 17:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:58.098 00:16:58.098 17:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:58.098 17:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:58.098 17:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.356 17:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.356 17:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:58.356 17:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.356 17:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.356 17:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.356 17:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:58.356 { 00:16:58.356 "auth": { 00:16:58.356 "dhgroup": "null", 00:16:58.356 "digest": "sha384", 00:16:58.356 "state": "completed" 00:16:58.356 }, 00:16:58.356 "cntlid": 51, 00:16:58.356 "listen_address": { 00:16:58.356 "adrfam": "IPv4", 00:16:58.356 "traddr": "10.0.0.2", 00:16:58.356 "trsvcid": "4420", 00:16:58.356 "trtype": "TCP" 00:16:58.356 }, 00:16:58.356 "peer_address": { 00:16:58.356 "adrfam": "IPv4", 00:16:58.356 "traddr": "10.0.0.1", 00:16:58.356 "trsvcid": "35682", 00:16:58.356 "trtype": "TCP" 00:16:58.356 }, 00:16:58.356 "qid": 0, 00:16:58.356 "state": "enabled", 00:16:58.356 "thread": "nvmf_tgt_poll_group_000" 00:16:58.356 } 00:16:58.356 ]' 00:16:58.356 17:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:58.356 17:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:58.356 17:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:58.356 17:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:58.356 17:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:58.356 17:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.356 17:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.356 17:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.614 17:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid 6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-secret DHHC-1:01:ZTk2MjA4NzJiNjNkZTQ4OWRiNjA3NmJkM2E3MmY5NTRplldX: --dhchap-ctrl-secret 
DHHC-1:02:Y2IzMjdhMTQ0YmZkY2RlMWQyNGZhNjViNWJhMTI5YmUzY2U3MjIwZTRjZDhiZDE46kAWTg==: 00:16:59.179 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:59.437 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:59.437 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:16:59.437 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.437 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.437 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.437 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:59.437 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:59.437 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:59.694 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:16:59.694 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:59.694 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:59.694 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:59.694 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:59.694 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:59.694 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:59.694 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.694 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.694 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.694 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:59.694 17:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:59.952 00:16:59.952 17:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:59.952 17:17:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:59.952 17:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.210 17:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.210 17:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.210 17:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.210 17:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.210 17:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.210 17:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:00.210 { 00:17:00.210 "auth": { 00:17:00.210 "dhgroup": "null", 00:17:00.210 "digest": "sha384", 00:17:00.210 "state": "completed" 00:17:00.210 }, 00:17:00.210 "cntlid": 53, 00:17:00.210 "listen_address": { 00:17:00.210 "adrfam": "IPv4", 00:17:00.210 "traddr": "10.0.0.2", 00:17:00.210 "trsvcid": "4420", 00:17:00.210 "trtype": "TCP" 00:17:00.210 }, 00:17:00.210 "peer_address": { 00:17:00.210 "adrfam": "IPv4", 00:17:00.210 "traddr": "10.0.0.1", 00:17:00.210 "trsvcid": "35714", 00:17:00.210 "trtype": "TCP" 00:17:00.210 }, 00:17:00.210 "qid": 0, 00:17:00.210 "state": "enabled", 00:17:00.210 "thread": "nvmf_tgt_poll_group_000" 00:17:00.210 } 00:17:00.210 ]' 00:17:00.210 17:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:00.210 17:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:00.210 17:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:00.468 17:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:00.468 17:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:00.468 17:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:00.468 17:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.468 17:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:00.726 17:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid 6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-secret DHHC-1:02:MjE5OWJmNzJjNjhjNjg4YTFiMDczZmMxZjMyZWJlZGRhZDEyZDAxNTliMTJmMjQ4MZCn5w==: --dhchap-ctrl-secret DHHC-1:01:MDUyYjY1NWU3ODRmOTZhN2NhZTU1ZjJlMGNmMGQ0OTR0PbC6: 00:17:01.292 17:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:01.292 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:01.292 17:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:17:01.292 17:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.292 17:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.292 17:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.292 17:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:01.292 17:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:01.292 17:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:01.550 17:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:17:01.550 17:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:01.550 17:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:01.550 17:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:01.550 17:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:01.550 17:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:01.550 17:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-key key3 00:17:01.550 17:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.551 17:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.551 17:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.551 17:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:01.551 17:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:02.117 00:17:02.117 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:02.117 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:02.117 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.375 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.375 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:17:02.375 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.375 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.375 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.375 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:02.375 { 00:17:02.375 "auth": { 00:17:02.376 "dhgroup": "null", 00:17:02.376 "digest": "sha384", 00:17:02.376 "state": "completed" 00:17:02.376 }, 00:17:02.376 "cntlid": 55, 00:17:02.376 "listen_address": { 00:17:02.376 "adrfam": "IPv4", 00:17:02.376 "traddr": "10.0.0.2", 00:17:02.376 "trsvcid": "4420", 00:17:02.376 "trtype": "TCP" 00:17:02.376 }, 00:17:02.376 "peer_address": { 00:17:02.376 "adrfam": "IPv4", 00:17:02.376 "traddr": "10.0.0.1", 00:17:02.376 "trsvcid": "58768", 00:17:02.376 "trtype": "TCP" 00:17:02.376 }, 00:17:02.376 "qid": 0, 00:17:02.376 "state": "enabled", 00:17:02.376 "thread": "nvmf_tgt_poll_group_000" 00:17:02.376 } 00:17:02.376 ]' 00:17:02.376 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:02.376 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:02.376 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:02.376 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:02.376 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:02.376 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:02.376 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:02.376 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:02.634 17:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid 6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-secret DHHC-1:03:OWNmOTRmNDAzYWIxNTc0ZjRlOGMzOGRiNmY2YjU5NWM3YWZlZmUxMTBjOTI4ZDM3Mjg5OGMxN2Q4MTkzZGMxZD8648c=: 00:17:03.568 17:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:03.568 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:03.569 17:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:17:03.569 17:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.569 17:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.569 17:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.569 17:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:03.569 17:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:03.569 17:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:03.569 17:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:03.569 17:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:17:03.569 17:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:03.569 17:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:03.569 17:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:03.569 17:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:03.569 17:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:03.569 17:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:03.569 17:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.569 17:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.569 17:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.569 17:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:03.569 17:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:04.135 00:17:04.135 17:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:04.135 17:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.135 17:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:04.135 17:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.135 17:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:04.135 17:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.135 17:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.394 17:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.394 17:17:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:04.394 { 00:17:04.394 "auth": { 00:17:04.394 "dhgroup": "ffdhe2048", 00:17:04.394 "digest": "sha384", 00:17:04.394 "state": "completed" 00:17:04.394 }, 00:17:04.394 "cntlid": 57, 00:17:04.394 "listen_address": { 00:17:04.394 "adrfam": "IPv4", 00:17:04.394 "traddr": "10.0.0.2", 00:17:04.394 "trsvcid": "4420", 00:17:04.394 "trtype": "TCP" 00:17:04.394 }, 00:17:04.394 "peer_address": { 00:17:04.394 "adrfam": "IPv4", 00:17:04.394 "traddr": "10.0.0.1", 00:17:04.394 "trsvcid": "58796", 00:17:04.394 "trtype": "TCP" 00:17:04.394 }, 00:17:04.394 "qid": 0, 00:17:04.394 "state": "enabled", 00:17:04.394 "thread": "nvmf_tgt_poll_group_000" 00:17:04.394 } 00:17:04.394 ]' 00:17:04.394 17:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:04.394 17:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:04.394 17:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:04.394 17:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:04.394 17:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:04.394 17:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:04.394 17:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:04.394 17:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:04.659 17:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid 6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-secret DHHC-1:00:ODdiZjVjN2JlODE3MTUxOGZmN2QzMmJjNGQ0NzkwZThlZTI4OWNhODkyYzliYTQ467zLJw==: --dhchap-ctrl-secret DHHC-1:03:Y2M5NjRlZTk5YTM2OTYxMTRiOGQ0MTlkYzI4YTgxN2RiNDcyY2YzZGU1NzBiMmIzNzI1NmI0NDU3YmM0NjJiMryQLi0=: 00:17:05.594 17:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:05.594 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:05.594 17:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:17:05.594 17:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.594 17:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.594 17:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.594 17:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:05.594 17:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:05.594 17:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:05.594 17:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:17:05.594 17:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:05.594 17:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:05.594 17:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:05.594 17:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:05.594 17:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:05.594 17:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.594 17:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.594 17:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.594 17:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.594 17:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.594 17:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:06.162 00:17:06.162 17:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:06.162 17:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:06.162 17:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:06.421 17:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.421 17:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:06.421 17:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.421 17:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.421 17:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.421 17:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:06.421 { 00:17:06.421 "auth": { 00:17:06.421 "dhgroup": "ffdhe2048", 00:17:06.421 "digest": "sha384", 00:17:06.421 "state": "completed" 00:17:06.421 }, 00:17:06.421 "cntlid": 59, 00:17:06.421 "listen_address": { 00:17:06.421 "adrfam": "IPv4", 00:17:06.421 "traddr": "10.0.0.2", 00:17:06.421 "trsvcid": 
"4420", 00:17:06.421 "trtype": "TCP" 00:17:06.421 }, 00:17:06.421 "peer_address": { 00:17:06.421 "adrfam": "IPv4", 00:17:06.421 "traddr": "10.0.0.1", 00:17:06.421 "trsvcid": "58824", 00:17:06.421 "trtype": "TCP" 00:17:06.421 }, 00:17:06.421 "qid": 0, 00:17:06.421 "state": "enabled", 00:17:06.421 "thread": "nvmf_tgt_poll_group_000" 00:17:06.421 } 00:17:06.421 ]' 00:17:06.421 17:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:06.421 17:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:06.421 17:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:06.421 17:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:06.421 17:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:06.680 17:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:06.680 17:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:06.680 17:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:06.939 17:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid 6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-secret DHHC-1:01:ZTk2MjA4NzJiNjNkZTQ4OWRiNjA3NmJkM2E3MmY5NTRplldX: --dhchap-ctrl-secret DHHC-1:02:Y2IzMjdhMTQ0YmZkY2RlMWQyNGZhNjViNWJhMTI5YmUzY2U3MjIwZTRjZDhiZDE46kAWTg==: 00:17:07.503 17:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:07.503 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:07.503 17:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:17:07.503 17:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.503 17:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.503 17:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.503 17:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:07.503 17:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:07.503 17:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:07.774 17:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:17:07.774 17:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:07.774 17:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 
00:17:07.774 17:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:07.774 17:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:07.774 17:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:07.774 17:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:07.774 17:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.774 17:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.774 17:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.774 17:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:07.774 17:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:08.351 00:17:08.351 17:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:08.351 17:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:08.351 17:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:08.611 17:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.611 17:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:08.611 17:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.611 17:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.611 17:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.611 17:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:08.611 { 00:17:08.611 "auth": { 00:17:08.611 "dhgroup": "ffdhe2048", 00:17:08.611 "digest": "sha384", 00:17:08.611 "state": "completed" 00:17:08.611 }, 00:17:08.611 "cntlid": 61, 00:17:08.611 "listen_address": { 00:17:08.611 "adrfam": "IPv4", 00:17:08.611 "traddr": "10.0.0.2", 00:17:08.611 "trsvcid": "4420", 00:17:08.611 "trtype": "TCP" 00:17:08.611 }, 00:17:08.611 "peer_address": { 00:17:08.611 "adrfam": "IPv4", 00:17:08.611 "traddr": "10.0.0.1", 00:17:08.611 "trsvcid": "58844", 00:17:08.611 "trtype": "TCP" 00:17:08.611 }, 00:17:08.611 "qid": 0, 00:17:08.611 "state": "enabled", 00:17:08.611 "thread": "nvmf_tgt_poll_group_000" 00:17:08.611 } 00:17:08.611 ]' 00:17:08.611 17:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:08.611 17:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:08.611 17:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:08.611 17:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:08.611 17:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:08.611 17:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:08.611 17:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:08.611 17:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:08.870 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid 6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-secret DHHC-1:02:MjE5OWJmNzJjNjhjNjg4YTFiMDczZmMxZjMyZWJlZGRhZDEyZDAxNTliMTJmMjQ4MZCn5w==: --dhchap-ctrl-secret DHHC-1:01:MDUyYjY1NWU3ODRmOTZhN2NhZTU1ZjJlMGNmMGQ0OTR0PbC6: 00:17:09.436 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:09.436 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:09.437 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:17:09.437 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.437 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.695 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.695 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:09.695 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:09.695 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:09.953 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:17:09.953 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:09.953 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:09.953 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:09.953 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:09.953 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:09.953 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-key key3 00:17:09.953 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.953 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.953 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.953 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:09.953 17:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:10.212 00:17:10.212 17:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:10.212 17:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.212 17:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:10.470 17:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.470 17:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:10.470 17:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.470 17:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.470 17:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.470 17:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:10.470 { 00:17:10.470 "auth": { 00:17:10.470 "dhgroup": "ffdhe2048", 00:17:10.470 "digest": "sha384", 00:17:10.470 "state": "completed" 00:17:10.470 }, 00:17:10.470 "cntlid": 63, 00:17:10.470 "listen_address": { 00:17:10.470 "adrfam": "IPv4", 00:17:10.470 "traddr": "10.0.0.2", 00:17:10.470 "trsvcid": "4420", 00:17:10.470 "trtype": "TCP" 00:17:10.470 }, 00:17:10.470 "peer_address": { 00:17:10.470 "adrfam": "IPv4", 00:17:10.470 "traddr": "10.0.0.1", 00:17:10.470 "trsvcid": "56904", 00:17:10.470 "trtype": "TCP" 00:17:10.470 }, 00:17:10.470 "qid": 0, 00:17:10.470 "state": "enabled", 00:17:10.470 "thread": "nvmf_tgt_poll_group_000" 00:17:10.470 } 00:17:10.470 ]' 00:17:10.470 17:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:10.470 17:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:10.471 17:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:10.730 17:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:10.730 17:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 
-- # jq -r '.[0].auth.state' 00:17:10.730 17:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:10.730 17:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:10.730 17:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:10.989 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid 6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-secret DHHC-1:03:OWNmOTRmNDAzYWIxNTc0ZjRlOGMzOGRiNmY2YjU5NWM3YWZlZmUxMTBjOTI4ZDM3Mjg5OGMxN2Q4MTkzZGMxZD8648c=: 00:17:11.555 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:11.556 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:11.556 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:17:11.556 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.556 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.556 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.556 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:11.556 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:11.556 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:11.556 17:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:11.814 17:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:17:11.814 17:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:11.814 17:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:11.814 17:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:11.814 17:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:11.814 17:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:11.814 17:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:11.814 17:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.814 17:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.814 17:17:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.814 17:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:11.814 17:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:12.381 00:17:12.381 17:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:12.381 17:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:12.381 17:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:12.381 17:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.381 17:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:12.381 17:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.381 17:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.381 17:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.640 17:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:12.640 { 00:17:12.640 "auth": { 00:17:12.640 "dhgroup": "ffdhe3072", 00:17:12.640 "digest": "sha384", 00:17:12.640 "state": "completed" 00:17:12.640 }, 00:17:12.640 "cntlid": 65, 00:17:12.640 "listen_address": { 00:17:12.640 "adrfam": "IPv4", 00:17:12.640 "traddr": "10.0.0.2", 00:17:12.640 "trsvcid": "4420", 00:17:12.640 "trtype": "TCP" 00:17:12.640 }, 00:17:12.640 "peer_address": { 00:17:12.640 "adrfam": "IPv4", 00:17:12.640 "traddr": "10.0.0.1", 00:17:12.640 "trsvcid": "56934", 00:17:12.640 "trtype": "TCP" 00:17:12.640 }, 00:17:12.640 "qid": 0, 00:17:12.640 "state": "enabled", 00:17:12.640 "thread": "nvmf_tgt_poll_group_000" 00:17:12.640 } 00:17:12.640 ]' 00:17:12.640 17:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:12.640 17:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:12.640 17:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:12.640 17:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:12.640 17:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:12.640 17:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:12.640 17:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:12.640 17:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:12.898 17:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid 6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-secret DHHC-1:00:ODdiZjVjN2JlODE3MTUxOGZmN2QzMmJjNGQ0NzkwZThlZTI4OWNhODkyYzliYTQ467zLJw==: --dhchap-ctrl-secret DHHC-1:03:Y2M5NjRlZTk5YTM2OTYxMTRiOGQ0MTlkYzI4YTgxN2RiNDcyY2YzZGU1NzBiMmIzNzI1NmI0NDU3YmM0NjJiMryQLi0=: 00:17:13.465 17:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:13.723 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:13.723 17:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:17:13.723 17:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.724 17:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.724 17:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.724 17:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:13.724 17:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:13.724 17:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:13.982 17:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:17:13.982 17:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:13.982 17:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:13.982 17:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:13.982 17:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:13.982 17:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:13.982 17:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:13.983 17:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.983 17:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.983 17:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.983 17:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:17:13.983 17:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:14.241 00:17:14.241 17:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:14.241 17:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:14.241 17:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:14.500 17:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.500 17:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:14.500 17:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.500 17:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.500 17:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.500 17:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:14.500 { 00:17:14.500 "auth": { 00:17:14.500 "dhgroup": "ffdhe3072", 00:17:14.500 "digest": "sha384", 00:17:14.500 "state": "completed" 00:17:14.500 }, 00:17:14.500 "cntlid": 67, 00:17:14.500 "listen_address": { 00:17:14.500 "adrfam": "IPv4", 00:17:14.500 "traddr": "10.0.0.2", 00:17:14.500 "trsvcid": "4420", 00:17:14.500 "trtype": "TCP" 00:17:14.500 }, 00:17:14.500 "peer_address": { 00:17:14.500 "adrfam": "IPv4", 00:17:14.500 "traddr": "10.0.0.1", 00:17:14.500 "trsvcid": "56958", 00:17:14.500 "trtype": "TCP" 00:17:14.500 }, 00:17:14.500 "qid": 0, 00:17:14.500 "state": "enabled", 00:17:14.500 "thread": "nvmf_tgt_poll_group_000" 00:17:14.500 } 00:17:14.500 ]' 00:17:14.500 17:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:14.764 17:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:14.764 17:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:14.764 17:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:14.764 17:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:14.764 17:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:14.764 17:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:14.764 17:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:15.053 17:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid 
6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-secret DHHC-1:01:ZTk2MjA4NzJiNjNkZTQ4OWRiNjA3NmJkM2E3MmY5NTRplldX: --dhchap-ctrl-secret DHHC-1:02:Y2IzMjdhMTQ0YmZkY2RlMWQyNGZhNjViNWJhMTI5YmUzY2U3MjIwZTRjZDhiZDE46kAWTg==: 00:17:15.620 17:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:15.620 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:15.620 17:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:17:15.620 17:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.620 17:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.620 17:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.621 17:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:15.621 17:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:15.621 17:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:15.879 17:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:17:15.880 17:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:15.880 17:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:15.880 17:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:15.880 17:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:15.880 17:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:15.880 17:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:15.880 17:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.880 17:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.880 17:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.880 17:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:15.880 17:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
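Once bdev_nvme_attach_controller returns, the test confirms on both ends that DH-HMAC-CHAP actually completed before moving on, which is what the entries that follow record. A rough sketch of those checks, with the jq filters and expected values taken from the log and the plumbing of the RPC output into jq abbreviated:

    # host side: the attached controller must appear under the expected name
    [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    # target side: the qpair must report the negotiated digest, DH group and a completed auth state
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
    # then detach the host-side controller before exercising the kernel initiator
    hostrpc bdev_nvme_detach_controller nvme0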
00:17:16.138 00:17:16.138 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:16.138 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:16.138 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.397 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.397 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:16.397 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.397 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.397 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.397 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:16.397 { 00:17:16.397 "auth": { 00:17:16.397 "dhgroup": "ffdhe3072", 00:17:16.397 "digest": "sha384", 00:17:16.397 "state": "completed" 00:17:16.397 }, 00:17:16.397 "cntlid": 69, 00:17:16.397 "listen_address": { 00:17:16.397 "adrfam": "IPv4", 00:17:16.397 "traddr": "10.0.0.2", 00:17:16.397 "trsvcid": "4420", 00:17:16.397 "trtype": "TCP" 00:17:16.397 }, 00:17:16.397 "peer_address": { 00:17:16.397 "adrfam": "IPv4", 00:17:16.397 "traddr": "10.0.0.1", 00:17:16.397 "trsvcid": "56984", 00:17:16.397 "trtype": "TCP" 00:17:16.397 }, 00:17:16.397 "qid": 0, 00:17:16.397 "state": "enabled", 00:17:16.397 "thread": "nvmf_tgt_poll_group_000" 00:17:16.397 } 00:17:16.397 ]' 00:17:16.397 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:16.397 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:16.655 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:16.655 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:16.655 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:16.655 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:16.655 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:16.655 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:16.914 17:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid 6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-secret DHHC-1:02:MjE5OWJmNzJjNjhjNjg4YTFiMDczZmMxZjMyZWJlZGRhZDEyZDAxNTliMTJmMjQ4MZCn5w==: --dhchap-ctrl-secret DHHC-1:01:MDUyYjY1NWU3ODRmOTZhN2NhZTU1ZjJlMGNmMGQ0OTR0PbC6: 00:17:17.480 17:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:17.480 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
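The nvme connect / nvme disconnect pair just above is the kernel-initiator leg of the same iteration: the subsystem is logged into once more with nvme-cli, this time passing the DH-HMAC-CHAP secrets directly, and the host entry is then removed from the target (next entry) so the following key can be installed from a clean state. Condensed, with the DHHC-1 secret strings elided here since they are printed in full in the log:

    # kernel initiator: authenticate with the host and controller secrets for this key index
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b \
        --hostid 6e739a91-f4c2-49d2-9c71-a65eb7d5315b \
        --dhchap-secret 'DHHC-1:02:...' --dhchap-ctrl-secret 'DHHC-1:01:...'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    # target: forget this host so the next dhgroup/key combination starts from scratch
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b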
00:17:17.480 17:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:17:17.480 17:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.480 17:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.480 17:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.480 17:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:17.738 17:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:17.738 17:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:17.996 17:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:17:17.996 17:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:17.996 17:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:17.996 17:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:17.996 17:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:17.996 17:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:17.996 17:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-key key3 00:17:17.996 17:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.996 17:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.996 17:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.996 17:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:17.996 17:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:18.254 00:17:18.254 17:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:18.254 17:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:18.254 17:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.512 17:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.512 17:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.512 17:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.512 17:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.512 17:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.512 17:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:18.512 { 00:17:18.512 "auth": { 00:17:18.512 "dhgroup": "ffdhe3072", 00:17:18.512 "digest": "sha384", 00:17:18.512 "state": "completed" 00:17:18.512 }, 00:17:18.512 "cntlid": 71, 00:17:18.512 "listen_address": { 00:17:18.512 "adrfam": "IPv4", 00:17:18.512 "traddr": "10.0.0.2", 00:17:18.512 "trsvcid": "4420", 00:17:18.512 "trtype": "TCP" 00:17:18.512 }, 00:17:18.512 "peer_address": { 00:17:18.512 "adrfam": "IPv4", 00:17:18.512 "traddr": "10.0.0.1", 00:17:18.512 "trsvcid": "57014", 00:17:18.512 "trtype": "TCP" 00:17:18.512 }, 00:17:18.512 "qid": 0, 00:17:18.512 "state": "enabled", 00:17:18.512 "thread": "nvmf_tgt_poll_group_000" 00:17:18.512 } 00:17:18.512 ]' 00:17:18.512 17:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:18.512 17:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:18.512 17:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:18.769 17:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:18.769 17:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:18.769 17:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.769 17:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.769 17:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:19.027 17:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid 6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-secret DHHC-1:03:OWNmOTRmNDAzYWIxNTc0ZjRlOGMzOGRiNmY2YjU5NWM3YWZlZmUxMTBjOTI4ZDM3Mjg5OGMxN2Q4MTkzZGMxZD8648c=: 00:17:19.593 17:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.593 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.593 17:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:17:19.593 17:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.593 17:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.593 17:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:17:19.593 17:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:19.593 17:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:19.593 17:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:19.593 17:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:19.851 17:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:17:19.851 17:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:19.851 17:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:19.851 17:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:19.851 17:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:19.851 17:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:19.851 17:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:19.851 17:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.851 17:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.851 17:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.851 17:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:19.851 17:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:20.417 00:17:20.417 17:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:20.417 17:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:20.417 17:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.675 17:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.675 17:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:20.675 17:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.675 17:18:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.675 17:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.675 17:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:20.675 { 00:17:20.675 "auth": { 00:17:20.675 "dhgroup": "ffdhe4096", 00:17:20.675 "digest": "sha384", 00:17:20.675 "state": "completed" 00:17:20.675 }, 00:17:20.675 "cntlid": 73, 00:17:20.675 "listen_address": { 00:17:20.675 "adrfam": "IPv4", 00:17:20.675 "traddr": "10.0.0.2", 00:17:20.675 "trsvcid": "4420", 00:17:20.675 "trtype": "TCP" 00:17:20.675 }, 00:17:20.675 "peer_address": { 00:17:20.675 "adrfam": "IPv4", 00:17:20.675 "traddr": "10.0.0.1", 00:17:20.675 "trsvcid": "49662", 00:17:20.675 "trtype": "TCP" 00:17:20.675 }, 00:17:20.675 "qid": 0, 00:17:20.675 "state": "enabled", 00:17:20.675 "thread": "nvmf_tgt_poll_group_000" 00:17:20.675 } 00:17:20.675 ]' 00:17:20.675 17:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:20.675 17:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:20.675 17:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:20.675 17:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:20.675 17:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:20.675 17:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:20.675 17:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:20.675 17:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:21.241 17:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid 6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-secret DHHC-1:00:ODdiZjVjN2JlODE3MTUxOGZmN2QzMmJjNGQ0NzkwZThlZTI4OWNhODkyYzliYTQ467zLJw==: --dhchap-ctrl-secret DHHC-1:03:Y2M5NjRlZTk5YTM2OTYxMTRiOGQ0MTlkYzI4YTgxN2RiNDcyY2YzZGU1NzBiMmIzNzI1NmI0NDU3YmM0NjJiMryQLi0=: 00:17:21.807 17:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:21.807 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:21.807 17:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:17:21.807 17:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.807 17:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.807 17:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.807 17:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:21.807 17:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:21.807 17:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:22.067 17:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:17:22.067 17:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:22.067 17:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:22.067 17:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:22.067 17:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:22.067 17:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:22.067 17:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:22.067 17:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.067 17:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.067 17:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.067 17:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:22.067 17:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:22.327 00:17:22.327 17:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:22.327 17:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:22.327 17:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.585 17:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.585 17:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:22.585 17:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.585 17:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.585 17:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.585 17:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:22.585 { 00:17:22.585 "auth": { 00:17:22.585 "dhgroup": "ffdhe4096", 
00:17:22.585 "digest": "sha384", 00:17:22.585 "state": "completed" 00:17:22.585 }, 00:17:22.585 "cntlid": 75, 00:17:22.585 "listen_address": { 00:17:22.585 "adrfam": "IPv4", 00:17:22.585 "traddr": "10.0.0.2", 00:17:22.585 "trsvcid": "4420", 00:17:22.585 "trtype": "TCP" 00:17:22.585 }, 00:17:22.585 "peer_address": { 00:17:22.585 "adrfam": "IPv4", 00:17:22.585 "traddr": "10.0.0.1", 00:17:22.585 "trsvcid": "49686", 00:17:22.585 "trtype": "TCP" 00:17:22.585 }, 00:17:22.585 "qid": 0, 00:17:22.585 "state": "enabled", 00:17:22.585 "thread": "nvmf_tgt_poll_group_000" 00:17:22.585 } 00:17:22.585 ]' 00:17:22.585 17:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:22.585 17:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:22.585 17:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:22.843 17:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:22.843 17:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:22.843 17:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:22.843 17:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:22.843 17:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:23.101 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid 6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-secret DHHC-1:01:ZTk2MjA4NzJiNjNkZTQ4OWRiNjA3NmJkM2E3MmY5NTRplldX: --dhchap-ctrl-secret DHHC-1:02:Y2IzMjdhMTQ0YmZkY2RlMWQyNGZhNjViNWJhMTI5YmUzY2U3MjIwZTRjZDhiZDE46kAWTg==: 00:17:23.667 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:23.667 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:23.667 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:17:23.667 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.667 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.667 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.667 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:23.667 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:23.667 17:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:23.925 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 
00:17:23.925 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:23.925 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:23.925 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:23.925 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:23.925 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:23.925 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:23.925 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.925 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.925 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.925 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:23.926 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:24.492 00:17:24.492 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:24.492 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:24.492 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.751 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.751 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:24.751 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.751 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.751 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.751 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:24.751 { 00:17:24.751 "auth": { 00:17:24.751 "dhgroup": "ffdhe4096", 00:17:24.751 "digest": "sha384", 00:17:24.751 "state": "completed" 00:17:24.751 }, 00:17:24.751 "cntlid": 77, 00:17:24.751 "listen_address": { 00:17:24.751 "adrfam": "IPv4", 00:17:24.751 "traddr": "10.0.0.2", 00:17:24.751 "trsvcid": "4420", 00:17:24.751 "trtype": "TCP" 00:17:24.751 }, 00:17:24.751 "peer_address": { 00:17:24.751 "adrfam": "IPv4", 00:17:24.751 "traddr": "10.0.0.1", 00:17:24.751 "trsvcid": "49704", 00:17:24.751 "trtype": 
"TCP" 00:17:24.751 }, 00:17:24.751 "qid": 0, 00:17:24.751 "state": "enabled", 00:17:24.751 "thread": "nvmf_tgt_poll_group_000" 00:17:24.751 } 00:17:24.751 ]' 00:17:24.751 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:24.751 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:24.751 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:24.751 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:24.751 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:24.751 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:24.751 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:24.751 17:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:25.009 17:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid 6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-secret DHHC-1:02:MjE5OWJmNzJjNjhjNjg4YTFiMDczZmMxZjMyZWJlZGRhZDEyZDAxNTliMTJmMjQ4MZCn5w==: --dhchap-ctrl-secret DHHC-1:01:MDUyYjY1NWU3ODRmOTZhN2NhZTU1ZjJlMGNmMGQ0OTR0PbC6: 00:17:25.945 17:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:25.945 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:25.945 17:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:17:25.945 17:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.945 17:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.945 17:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.945 17:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:25.945 17:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:25.945 17:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:25.945 17:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:17:25.945 17:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:25.945 17:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:25.945 17:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:25.945 17:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
key=key3 00:17:25.945 17:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:25.945 17:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-key key3 00:17:25.945 17:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.945 17:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.945 17:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.945 17:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:25.945 17:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:26.513 00:17:26.513 17:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:26.513 17:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:26.513 17:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.771 17:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.771 17:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:26.771 17:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.771 17:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.771 17:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.771 17:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:26.771 { 00:17:26.771 "auth": { 00:17:26.771 "dhgroup": "ffdhe4096", 00:17:26.771 "digest": "sha384", 00:17:26.771 "state": "completed" 00:17:26.771 }, 00:17:26.771 "cntlid": 79, 00:17:26.771 "listen_address": { 00:17:26.771 "adrfam": "IPv4", 00:17:26.771 "traddr": "10.0.0.2", 00:17:26.771 "trsvcid": "4420", 00:17:26.771 "trtype": "TCP" 00:17:26.772 }, 00:17:26.772 "peer_address": { 00:17:26.772 "adrfam": "IPv4", 00:17:26.772 "traddr": "10.0.0.1", 00:17:26.772 "trsvcid": "49732", 00:17:26.772 "trtype": "TCP" 00:17:26.772 }, 00:17:26.772 "qid": 0, 00:17:26.772 "state": "enabled", 00:17:26.772 "thread": "nvmf_tgt_poll_group_000" 00:17:26.772 } 00:17:26.772 ]' 00:17:26.772 17:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:26.772 17:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:26.772 17:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 
00:17:26.772 17:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:26.772 17:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:26.772 17:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:26.772 17:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:26.772 17:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:27.030 17:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid 6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-secret DHHC-1:03:OWNmOTRmNDAzYWIxNTc0ZjRlOGMzOGRiNmY2YjU5NWM3YWZlZmUxMTBjOTI4ZDM3Mjg5OGMxN2Q4MTkzZGMxZD8648c=: 00:17:27.967 17:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:27.967 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:27.967 17:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:17:27.967 17:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.967 17:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.967 17:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.967 17:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:27.967 17:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:27.967 17:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:27.967 17:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:27.967 17:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:17:27.967 17:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:27.967 17:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:27.967 17:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:27.967 17:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:27.967 17:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:27.967 17:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:27.967 17:18:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.967 17:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.967 17:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.967 17:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:27.967 17:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:28.534 00:17:28.534 17:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:28.534 17:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.534 17:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:28.792 17:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.792 17:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:28.792 17:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.792 17:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.792 17:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.792 17:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:28.792 { 00:17:28.792 "auth": { 00:17:28.792 "dhgroup": "ffdhe6144", 00:17:28.792 "digest": "sha384", 00:17:28.792 "state": "completed" 00:17:28.792 }, 00:17:28.792 "cntlid": 81, 00:17:28.792 "listen_address": { 00:17:28.792 "adrfam": "IPv4", 00:17:28.792 "traddr": "10.0.0.2", 00:17:28.792 "trsvcid": "4420", 00:17:28.792 "trtype": "TCP" 00:17:28.792 }, 00:17:28.792 "peer_address": { 00:17:28.792 "adrfam": "IPv4", 00:17:28.792 "traddr": "10.0.0.1", 00:17:28.792 "trsvcid": "49752", 00:17:28.792 "trtype": "TCP" 00:17:28.792 }, 00:17:28.792 "qid": 0, 00:17:28.792 "state": "enabled", 00:17:28.792 "thread": "nvmf_tgt_poll_group_000" 00:17:28.792 } 00:17:28.792 ]' 00:17:28.792 17:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:28.792 17:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:28.793 17:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:28.793 17:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:28.793 17:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:29.050 17:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:17:29.050 17:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:29.051 17:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:29.309 17:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid 6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-secret DHHC-1:00:ODdiZjVjN2JlODE3MTUxOGZmN2QzMmJjNGQ0NzkwZThlZTI4OWNhODkyYzliYTQ467zLJw==: --dhchap-ctrl-secret DHHC-1:03:Y2M5NjRlZTk5YTM2OTYxMTRiOGQ0MTlkYzI4YTgxN2RiNDcyY2YzZGU1NzBiMmIzNzI1NmI0NDU3YmM0NjJiMryQLi0=: 00:17:29.876 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:29.876 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:29.876 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:17:29.876 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.876 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.876 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.876 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:29.876 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:29.876 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:30.450 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:17:30.450 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:30.450 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:30.450 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:30.450 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:30.450 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:30.450 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:30.450 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.450 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.450 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.450 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:30.450 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:30.709 00:17:30.709 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:30.709 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:30.709 17:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:31.275 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.275 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:31.275 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.275 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.275 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.275 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:31.275 { 00:17:31.275 "auth": { 00:17:31.275 "dhgroup": "ffdhe6144", 00:17:31.275 "digest": "sha384", 00:17:31.275 "state": "completed" 00:17:31.275 }, 00:17:31.275 "cntlid": 83, 00:17:31.275 "listen_address": { 00:17:31.275 "adrfam": "IPv4", 00:17:31.275 "traddr": "10.0.0.2", 00:17:31.275 "trsvcid": "4420", 00:17:31.275 "trtype": "TCP" 00:17:31.275 }, 00:17:31.275 "peer_address": { 00:17:31.275 "adrfam": "IPv4", 00:17:31.275 "traddr": "10.0.0.1", 00:17:31.275 "trsvcid": "57930", 00:17:31.275 "trtype": "TCP" 00:17:31.275 }, 00:17:31.275 "qid": 0, 00:17:31.275 "state": "enabled", 00:17:31.275 "thread": "nvmf_tgt_poll_group_000" 00:17:31.275 } 00:17:31.275 ]' 00:17:31.275 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:31.275 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:31.275 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:31.275 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:31.275 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:31.275 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:31.275 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:31.275 17:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:31.534 17:18:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid 6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-secret DHHC-1:01:ZTk2MjA4NzJiNjNkZTQ4OWRiNjA3NmJkM2E3MmY5NTRplldX: --dhchap-ctrl-secret DHHC-1:02:Y2IzMjdhMTQ0YmZkY2RlMWQyNGZhNjViNWJhMTI5YmUzY2U3MjIwZTRjZDhiZDE46kAWTg==: 00:17:32.100 17:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:32.100 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:32.100 17:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:17:32.100 17:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.100 17:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.100 17:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.100 17:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:32.100 17:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:32.100 17:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:32.666 17:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:17:32.666 17:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:32.666 17:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:32.666 17:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:32.666 17:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:32.666 17:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:32.666 17:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:32.666 17:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.666 17:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.666 17:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.666 17:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:32.666 17:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:32.924 00:17:32.924 17:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:32.924 17:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:32.924 17:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:33.181 17:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.181 17:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:33.181 17:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.181 17:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.181 17:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.181 17:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:33.181 { 00:17:33.181 "auth": { 00:17:33.181 "dhgroup": "ffdhe6144", 00:17:33.181 "digest": "sha384", 00:17:33.181 "state": "completed" 00:17:33.181 }, 00:17:33.181 "cntlid": 85, 00:17:33.181 "listen_address": { 00:17:33.181 "adrfam": "IPv4", 00:17:33.181 "traddr": "10.0.0.2", 00:17:33.181 "trsvcid": "4420", 00:17:33.181 "trtype": "TCP" 00:17:33.181 }, 00:17:33.181 "peer_address": { 00:17:33.181 "adrfam": "IPv4", 00:17:33.181 "traddr": "10.0.0.1", 00:17:33.181 "trsvcid": "57950", 00:17:33.181 "trtype": "TCP" 00:17:33.181 }, 00:17:33.181 "qid": 0, 00:17:33.181 "state": "enabled", 00:17:33.181 "thread": "nvmf_tgt_poll_group_000" 00:17:33.181 } 00:17:33.181 ]' 00:17:33.181 17:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:33.439 17:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:33.439 17:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:33.439 17:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:33.439 17:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:33.439 17:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:33.439 17:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:33.439 17:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:33.697 17:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid 6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-secret DHHC-1:02:MjE5OWJmNzJjNjhjNjg4YTFiMDczZmMxZjMyZWJlZGRhZDEyZDAxNTliMTJmMjQ4MZCn5w==: --dhchap-ctrl-secret 
DHHC-1:01:MDUyYjY1NWU3ODRmOTZhN2NhZTU1ZjJlMGNmMGQ0OTR0PbC6: 00:17:34.262 17:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:34.262 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:34.262 17:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:17:34.262 17:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.262 17:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.262 17:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.262 17:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:34.262 17:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:34.262 17:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:34.520 17:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:17:34.520 17:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:34.520 17:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:34.520 17:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:34.520 17:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:34.520 17:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:34.520 17:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-key key3 00:17:34.520 17:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.520 17:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.520 17:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.779 17:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:34.779 17:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:35.036 00:17:35.037 17:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:35.037 17:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:35.037 17:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:35.294 17:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.294 17:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:35.294 17:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.295 17:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.295 17:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.295 17:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:35.295 { 00:17:35.295 "auth": { 00:17:35.295 "dhgroup": "ffdhe6144", 00:17:35.295 "digest": "sha384", 00:17:35.295 "state": "completed" 00:17:35.295 }, 00:17:35.295 "cntlid": 87, 00:17:35.295 "listen_address": { 00:17:35.295 "adrfam": "IPv4", 00:17:35.295 "traddr": "10.0.0.2", 00:17:35.295 "trsvcid": "4420", 00:17:35.295 "trtype": "TCP" 00:17:35.295 }, 00:17:35.295 "peer_address": { 00:17:35.295 "adrfam": "IPv4", 00:17:35.295 "traddr": "10.0.0.1", 00:17:35.295 "trsvcid": "57976", 00:17:35.295 "trtype": "TCP" 00:17:35.295 }, 00:17:35.295 "qid": 0, 00:17:35.295 "state": "enabled", 00:17:35.295 "thread": "nvmf_tgt_poll_group_000" 00:17:35.295 } 00:17:35.295 ]' 00:17:35.295 17:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:35.295 17:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:35.295 17:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:35.553 17:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:35.553 17:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:35.553 17:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:35.553 17:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:35.553 17:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:35.810 17:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid 6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-secret DHHC-1:03:OWNmOTRmNDAzYWIxNTc0ZjRlOGMzOGRiNmY2YjU5NWM3YWZlZmUxMTBjOTI4ZDM3Mjg5OGMxN2Q4MTkzZGMxZD8648c=: 00:17:36.375 17:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:36.375 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:36.375 17:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:17:36.375 17:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.375 17:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.375 17:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.375 17:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:36.375 17:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:36.375 17:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:36.375 17:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:36.633 17:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:17:36.633 17:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:36.633 17:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:36.633 17:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:36.633 17:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:36.633 17:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:36.633 17:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:36.633 17:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.633 17:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.891 17:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.891 17:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:36.891 17:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.455 00:17:37.455 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:37.456 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:37.456 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:37.714 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.714 17:18:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:37.714 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.714 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.714 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.714 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:37.714 { 00:17:37.714 "auth": { 00:17:37.714 "dhgroup": "ffdhe8192", 00:17:37.714 "digest": "sha384", 00:17:37.714 "state": "completed" 00:17:37.714 }, 00:17:37.714 "cntlid": 89, 00:17:37.714 "listen_address": { 00:17:37.714 "adrfam": "IPv4", 00:17:37.714 "traddr": "10.0.0.2", 00:17:37.714 "trsvcid": "4420", 00:17:37.714 "trtype": "TCP" 00:17:37.714 }, 00:17:37.714 "peer_address": { 00:17:37.714 "adrfam": "IPv4", 00:17:37.714 "traddr": "10.0.0.1", 00:17:37.714 "trsvcid": "58006", 00:17:37.714 "trtype": "TCP" 00:17:37.714 }, 00:17:37.714 "qid": 0, 00:17:37.714 "state": "enabled", 00:17:37.714 "thread": "nvmf_tgt_poll_group_000" 00:17:37.714 } 00:17:37.714 ]' 00:17:37.714 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:37.714 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:37.714 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:37.714 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:37.714 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:37.972 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:37.972 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:37.972 17:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:37.972 17:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid 6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-secret DHHC-1:00:ODdiZjVjN2JlODE3MTUxOGZmN2QzMmJjNGQ0NzkwZThlZTI4OWNhODkyYzliYTQ467zLJw==: --dhchap-ctrl-secret DHHC-1:03:Y2M5NjRlZTk5YTM2OTYxMTRiOGQ0MTlkYzI4YTgxN2RiNDcyY2YzZGU1NzBiMmIzNzI1NmI0NDU3YmM0NjJiMryQLi0=: 00:17:38.906 17:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:38.906 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:38.906 17:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:17:38.906 17:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.906 17:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.906 17:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.906 17:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:38.906 17:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:38.906 17:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:38.906 17:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:17:38.906 17:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:38.906 17:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:38.906 17:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:38.906 17:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:38.906 17:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:38.906 17:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:38.906 17:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.906 17:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.183 17:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.183 17:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.183 17:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.748 00:17:39.748 17:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:39.748 17:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:39.748 17:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:40.005 17:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.005 17:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:40.005 17:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.005 17:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.005 17:18:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.005 17:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:40.005 { 00:17:40.005 "auth": { 00:17:40.005 "dhgroup": "ffdhe8192", 00:17:40.005 "digest": "sha384", 00:17:40.005 "state": "completed" 00:17:40.005 }, 00:17:40.005 "cntlid": 91, 00:17:40.005 "listen_address": { 00:17:40.005 "adrfam": "IPv4", 00:17:40.005 "traddr": "10.0.0.2", 00:17:40.005 "trsvcid": "4420", 00:17:40.005 "trtype": "TCP" 00:17:40.005 }, 00:17:40.005 "peer_address": { 00:17:40.005 "adrfam": "IPv4", 00:17:40.005 "traddr": "10.0.0.1", 00:17:40.005 "trsvcid": "58048", 00:17:40.005 "trtype": "TCP" 00:17:40.005 }, 00:17:40.005 "qid": 0, 00:17:40.005 "state": "enabled", 00:17:40.005 "thread": "nvmf_tgt_poll_group_000" 00:17:40.005 } 00:17:40.005 ]' 00:17:40.006 17:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:40.006 17:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:40.006 17:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:40.006 17:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:40.006 17:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:40.006 17:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:40.006 17:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:40.006 17:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:40.572 17:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid 6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-secret DHHC-1:01:ZTk2MjA4NzJiNjNkZTQ4OWRiNjA3NmJkM2E3MmY5NTRplldX: --dhchap-ctrl-secret DHHC-1:02:Y2IzMjdhMTQ0YmZkY2RlMWQyNGZhNjViNWJhMTI5YmUzY2U3MjIwZTRjZDhiZDE46kAWTg==: 00:17:41.139 17:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:41.139 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:41.139 17:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:17:41.139 17:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.139 17:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.139 17:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.139 17:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:41.139 17:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:41.139 17:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:41.397 17:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:17:41.397 17:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:41.397 17:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:41.397 17:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:41.397 17:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:41.397 17:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:41.397 17:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:41.397 17:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.397 17:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.397 17:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.397 17:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:41.397 17:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:41.964 00:17:41.964 17:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:41.964 17:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:41.964 17:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:42.222 17:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.222 17:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:42.222 17:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.222 17:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.222 17:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.222 17:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:42.222 { 00:17:42.222 "auth": { 00:17:42.222 "dhgroup": "ffdhe8192", 00:17:42.222 "digest": "sha384", 00:17:42.222 "state": "completed" 00:17:42.222 }, 00:17:42.222 "cntlid": 93, 00:17:42.222 "listen_address": { 00:17:42.222 "adrfam": 
"IPv4", 00:17:42.222 "traddr": "10.0.0.2", 00:17:42.222 "trsvcid": "4420", 00:17:42.222 "trtype": "TCP" 00:17:42.222 }, 00:17:42.222 "peer_address": { 00:17:42.222 "adrfam": "IPv4", 00:17:42.222 "traddr": "10.0.0.1", 00:17:42.222 "trsvcid": "38174", 00:17:42.222 "trtype": "TCP" 00:17:42.222 }, 00:17:42.222 "qid": 0, 00:17:42.222 "state": "enabled", 00:17:42.222 "thread": "nvmf_tgt_poll_group_000" 00:17:42.222 } 00:17:42.222 ]' 00:17:42.222 17:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:42.480 17:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:42.480 17:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:42.480 17:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:42.480 17:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:42.480 17:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:42.480 17:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:42.480 17:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:42.738 17:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid 6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-secret DHHC-1:02:MjE5OWJmNzJjNjhjNjg4YTFiMDczZmMxZjMyZWJlZGRhZDEyZDAxNTliMTJmMjQ4MZCn5w==: --dhchap-ctrl-secret DHHC-1:01:MDUyYjY1NWU3ODRmOTZhN2NhZTU1ZjJlMGNmMGQ0OTR0PbC6: 00:17:43.358 17:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:43.358 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:43.358 17:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:17:43.358 17:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.358 17:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.358 17:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.358 17:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:43.359 17:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:43.359 17:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:43.618 17:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:17:43.618 17:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:43.618 17:18:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:43.618 17:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:43.618 17:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:43.618 17:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:43.618 17:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-key key3 00:17:43.618 17:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.618 17:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.618 17:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.618 17:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:43.618 17:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:44.184 00:17:44.184 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:44.184 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:44.184 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.443 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.443 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:44.443 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.443 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.443 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.443 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:44.443 { 00:17:44.443 "auth": { 00:17:44.443 "dhgroup": "ffdhe8192", 00:17:44.443 "digest": "sha384", 00:17:44.443 "state": "completed" 00:17:44.443 }, 00:17:44.443 "cntlid": 95, 00:17:44.443 "listen_address": { 00:17:44.443 "adrfam": "IPv4", 00:17:44.443 "traddr": "10.0.0.2", 00:17:44.443 "trsvcid": "4420", 00:17:44.443 "trtype": "TCP" 00:17:44.443 }, 00:17:44.443 "peer_address": { 00:17:44.443 "adrfam": "IPv4", 00:17:44.443 "traddr": "10.0.0.1", 00:17:44.443 "trsvcid": "38210", 00:17:44.443 "trtype": "TCP" 00:17:44.443 }, 00:17:44.443 "qid": 0, 00:17:44.443 "state": "enabled", 00:17:44.443 "thread": "nvmf_tgt_poll_group_000" 00:17:44.443 } 00:17:44.443 ]' 00:17:44.443 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:44.702 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:44.702 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:44.702 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:44.702 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:44.702 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:44.702 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.702 17:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:44.961 17:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid 6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-secret DHHC-1:03:OWNmOTRmNDAzYWIxNTc0ZjRlOGMzOGRiNmY2YjU5NWM3YWZlZmUxMTBjOTI4ZDM3Mjg5OGMxN2Q4MTkzZGMxZD8648c=: 00:17:45.896 17:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:45.896 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:45.896 17:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:17:45.896 17:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.896 17:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.896 17:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.896 17:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:17:45.896 17:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:45.896 17:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:45.896 17:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:45.896 17:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:46.155 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:17:46.155 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:46.155 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:46.155 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:46.155 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:46.155 17:18:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:46.155 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:46.155 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.155 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.155 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.155 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:46.155 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:46.412 00:17:46.412 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:46.412 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:46.412 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.670 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.670 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:46.670 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.670 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.670 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.670 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:46.670 { 00:17:46.670 "auth": { 00:17:46.670 "dhgroup": "null", 00:17:46.670 "digest": "sha512", 00:17:46.670 "state": "completed" 00:17:46.670 }, 00:17:46.670 "cntlid": 97, 00:17:46.670 "listen_address": { 00:17:46.670 "adrfam": "IPv4", 00:17:46.670 "traddr": "10.0.0.2", 00:17:46.670 "trsvcid": "4420", 00:17:46.670 "trtype": "TCP" 00:17:46.670 }, 00:17:46.670 "peer_address": { 00:17:46.670 "adrfam": "IPv4", 00:17:46.670 "traddr": "10.0.0.1", 00:17:46.670 "trsvcid": "38220", 00:17:46.670 "trtype": "TCP" 00:17:46.670 }, 00:17:46.670 "qid": 0, 00:17:46.670 "state": "enabled", 00:17:46.670 "thread": "nvmf_tgt_poll_group_000" 00:17:46.670 } 00:17:46.670 ]' 00:17:46.670 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:46.670 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:46.670 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:46.670 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:46.670 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:46.928 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:46.928 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.928 17:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:47.185 17:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid 6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-secret DHHC-1:00:ODdiZjVjN2JlODE3MTUxOGZmN2QzMmJjNGQ0NzkwZThlZTI4OWNhODkyYzliYTQ467zLJw==: --dhchap-ctrl-secret DHHC-1:03:Y2M5NjRlZTk5YTM2OTYxMTRiOGQ0MTlkYzI4YTgxN2RiNDcyY2YzZGU1NzBiMmIzNzI1NmI0NDU3YmM0NjJiMryQLi0=: 00:17:47.818 17:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:47.818 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:47.818 17:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:17:47.818 17:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.818 17:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.818 17:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.818 17:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:47.818 17:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:47.818 17:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:48.076 17:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:17:48.076 17:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:48.076 17:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:48.076 17:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:48.076 17:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:48.076 17:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:48.076 17:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:48.076 17:18:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.076 17:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.076 17:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.076 17:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:48.076 17:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:48.335 00:17:48.335 17:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:48.335 17:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:48.335 17:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:48.593 17:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.593 17:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:48.593 17:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.593 17:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.593 17:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.593 17:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:48.593 { 00:17:48.593 "auth": { 00:17:48.593 "dhgroup": "null", 00:17:48.593 "digest": "sha512", 00:17:48.593 "state": "completed" 00:17:48.593 }, 00:17:48.593 "cntlid": 99, 00:17:48.593 "listen_address": { 00:17:48.593 "adrfam": "IPv4", 00:17:48.593 "traddr": "10.0.0.2", 00:17:48.593 "trsvcid": "4420", 00:17:48.593 "trtype": "TCP" 00:17:48.593 }, 00:17:48.593 "peer_address": { 00:17:48.593 "adrfam": "IPv4", 00:17:48.593 "traddr": "10.0.0.1", 00:17:48.593 "trsvcid": "38236", 00:17:48.593 "trtype": "TCP" 00:17:48.593 }, 00:17:48.593 "qid": 0, 00:17:48.593 "state": "enabled", 00:17:48.593 "thread": "nvmf_tgt_poll_group_000" 00:17:48.593 } 00:17:48.593 ]' 00:17:48.593 17:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:48.593 17:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:48.593 17:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:48.851 17:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:48.851 17:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:48.851 17:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 
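The trace above repeats one fixed per-iteration pattern for every (digest, dhgroup, key id) combination. As a readability aid, a minimal bash sketch of that pattern follows; it assumes the same repo path, RPC sockets, NQNs and the pre-registered key names key0..key3 / ckey0..ckey3 used earlier in auth.sh, and it folds the target-side rpc_cmd wrapper into direct rpc.py calls, which is only an approximation of what the script actually runs.

  #!/usr/bin/env bash
  # Illustrative condensation of one auth.sh iteration (not the literal script).
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  hostsock=/var/tmp/host.sock                               # host-side SPDK app socket
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b
  digest=sha512 dhgroup=null keyid=1                        # values from the surrounding iteration

  # Host side: restrict DH-HMAC-CHAP to the digest/dhgroup under test.
  "$rpc" -s "$hostsock" bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
  # Target side: admit the host NQN with the matching key (plus controller key when one exists).
  "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
  # Host side: attach a controller, which forces the authentication exchange.
  "$rpc" -s "$hostsock" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$hostnqn" -n "$subnqn" --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
  # Target side: confirm the new qpair negotiated the expected parameters.
  "$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth | "\(.digest) \(.dhgroup) \(.state)"'
  # Tear the host controller down again before the nvme-cli leg of the iteration.
  "$rpc" -s "$hostsock" bdev_nvme_detach_controller nvme0
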
00:17:48.851 17:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:48.851 17:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:49.110 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid 6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-secret DHHC-1:01:ZTk2MjA4NzJiNjNkZTQ4OWRiNjA3NmJkM2E3MmY5NTRplldX: --dhchap-ctrl-secret DHHC-1:02:Y2IzMjdhMTQ0YmZkY2RlMWQyNGZhNjViNWJhMTI5YmUzY2U3MjIwZTRjZDhiZDE46kAWTg==: 00:17:49.676 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:49.676 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:49.676 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:17:49.676 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.676 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.676 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.676 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:49.676 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:49.676 17:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:49.934 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:17:49.934 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:49.934 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:49.934 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:49.934 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:49.934 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:49.934 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:49.934 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.934 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.934 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.934 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:49.934 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:50.193 00:17:50.452 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:50.452 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:50.452 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.710 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.710 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:50.710 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.710 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.711 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.711 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:50.711 { 00:17:50.711 "auth": { 00:17:50.711 "dhgroup": "null", 00:17:50.711 "digest": "sha512", 00:17:50.711 "state": "completed" 00:17:50.711 }, 00:17:50.711 "cntlid": 101, 00:17:50.711 "listen_address": { 00:17:50.711 "adrfam": "IPv4", 00:17:50.711 "traddr": "10.0.0.2", 00:17:50.711 "trsvcid": "4420", 00:17:50.711 "trtype": "TCP" 00:17:50.711 }, 00:17:50.711 "peer_address": { 00:17:50.711 "adrfam": "IPv4", 00:17:50.711 "traddr": "10.0.0.1", 00:17:50.711 "trsvcid": "39308", 00:17:50.711 "trtype": "TCP" 00:17:50.711 }, 00:17:50.711 "qid": 0, 00:17:50.711 "state": "enabled", 00:17:50.711 "thread": "nvmf_tgt_poll_group_000" 00:17:50.711 } 00:17:50.711 ]' 00:17:50.711 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:50.711 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:50.711 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:50.711 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:50.711 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:50.711 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:50.711 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:50.711 17:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:50.969 17:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 
-i 1 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid 6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-secret DHHC-1:02:MjE5OWJmNzJjNjhjNjg4YTFiMDczZmMxZjMyZWJlZGRhZDEyZDAxNTliMTJmMjQ4MZCn5w==: --dhchap-ctrl-secret DHHC-1:01:MDUyYjY1NWU3ODRmOTZhN2NhZTU1ZjJlMGNmMGQ0OTR0PbC6: 00:17:51.915 17:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:51.915 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:51.915 17:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:17:51.916 17:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.916 17:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.916 17:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.916 17:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:51.916 17:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:51.916 17:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:51.916 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:17:51.916 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:51.916 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:51.916 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:51.916 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:51.916 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:51.916 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-key key3 00:17:51.916 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.916 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.916 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.916 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:51.916 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:52.482 
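Each iteration then repeats the same authentication through the Linux kernel initiator with nvme-cli, handing it the plaintext DHHC-1 secrets directly instead of key names. Roughly, under the same NQNs and addresses as above, with KEY/CKEY standing in for the DHHC-1:... strings printed in the trace (placeholders here, not usable values):

  # Kernel-initiator leg of the iteration: connect with DH-HMAC-CHAP, then clean up.
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostid=6e739a91-f4c2-49d2-9c71-a65eb7d5315b
  hostnqn="nqn.2014-08.org.nvmexpress:uuid:$hostid"
  KEY='DHHC-1:...'    # placeholder: one host secret from the trace
  CKEY='DHHC-1:...'   # placeholder: its controller secret (only some key ids have one)

  nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" --hostid "$hostid" \
      --dhchap-secret "$KEY" --dhchap-ctrl-secret "$CKEY"
  nvme disconnect -n "$subnqn"                              # the trace expects "disconnected 1 controller(s)"
  rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"   # drop the host before the next combination
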
00:17:52.482 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:52.482 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:52.482 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:52.482 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.482 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:52.482 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.482 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.740 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.740 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:52.740 { 00:17:52.740 "auth": { 00:17:52.740 "dhgroup": "null", 00:17:52.740 "digest": "sha512", 00:17:52.740 "state": "completed" 00:17:52.740 }, 00:17:52.740 "cntlid": 103, 00:17:52.740 "listen_address": { 00:17:52.740 "adrfam": "IPv4", 00:17:52.740 "traddr": "10.0.0.2", 00:17:52.740 "trsvcid": "4420", 00:17:52.740 "trtype": "TCP" 00:17:52.740 }, 00:17:52.740 "peer_address": { 00:17:52.740 "adrfam": "IPv4", 00:17:52.740 "traddr": "10.0.0.1", 00:17:52.740 "trsvcid": "39340", 00:17:52.740 "trtype": "TCP" 00:17:52.740 }, 00:17:52.740 "qid": 0, 00:17:52.740 "state": "enabled", 00:17:52.740 "thread": "nvmf_tgt_poll_group_000" 00:17:52.740 } 00:17:52.740 ]' 00:17:52.740 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:52.740 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:52.740 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:52.740 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:52.740 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:52.740 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:52.740 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:52.740 17:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:52.998 17:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid 6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-secret DHHC-1:03:OWNmOTRmNDAzYWIxNTc0ZjRlOGMzOGRiNmY2YjU5NWM3YWZlZmUxMTBjOTI4ZDM3Mjg5OGMxN2Q4MTkzZGMxZD8648c=: 00:17:53.565 17:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:53.565 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:53.565 17:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:17:53.565 17:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.565 17:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.565 17:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.565 17:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:53.565 17:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:53.565 17:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:53.565 17:18:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:54.130 17:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:17:54.130 17:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:54.130 17:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:54.130 17:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:54.130 17:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:54.130 17:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:54.130 17:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:54.130 17:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.130 17:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.130 17:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.130 17:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:54.130 17:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:54.389 00:17:54.389 17:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:54.389 17:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.389 17:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:17:54.647 17:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.647 17:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:54.647 17:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.647 17:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.647 17:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.647 17:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:54.647 { 00:17:54.647 "auth": { 00:17:54.647 "dhgroup": "ffdhe2048", 00:17:54.647 "digest": "sha512", 00:17:54.647 "state": "completed" 00:17:54.647 }, 00:17:54.647 "cntlid": 105, 00:17:54.647 "listen_address": { 00:17:54.647 "adrfam": "IPv4", 00:17:54.647 "traddr": "10.0.0.2", 00:17:54.647 "trsvcid": "4420", 00:17:54.647 "trtype": "TCP" 00:17:54.647 }, 00:17:54.647 "peer_address": { 00:17:54.647 "adrfam": "IPv4", 00:17:54.647 "traddr": "10.0.0.1", 00:17:54.647 "trsvcid": "39364", 00:17:54.647 "trtype": "TCP" 00:17:54.647 }, 00:17:54.647 "qid": 0, 00:17:54.647 "state": "enabled", 00:17:54.647 "thread": "nvmf_tgt_poll_group_000" 00:17:54.647 } 00:17:54.647 ]' 00:17:54.647 17:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:54.647 17:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:54.647 17:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:54.647 17:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:54.647 17:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:54.647 17:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:54.647 17:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:54.647 17:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.904 17:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid 6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-secret DHHC-1:00:ODdiZjVjN2JlODE3MTUxOGZmN2QzMmJjNGQ0NzkwZThlZTI4OWNhODkyYzliYTQ467zLJw==: --dhchap-ctrl-secret DHHC-1:03:Y2M5NjRlZTk5YTM2OTYxMTRiOGQ0MTlkYzI4YTgxN2RiNDcyY2YzZGU1NzBiMmIzNzI1NmI0NDU3YmM0NjJiMryQLi0=: 00:17:55.851 17:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:55.851 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:55.851 17:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:17:55.851 17:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
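The [[ ... == ... ]] checks scattered through the trace all validate the same small JSON document returned by nvmf_subsystem_get_qpairs. A compact sketch of what they assert, assuming $digest and $dhgroup hold the pair configured for the current iteration (sha512/ffdhe2048 at this point in the log):

  # Assert the first reported qpair completed DH-HMAC-CHAP with the expected parameters.
  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<<"$qpairs") == "$digest"  ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<<"$qpairs") == "$dhgroup" ]]
  [[ $(jq -r '.[0].auth.state'   <<<"$qpairs") == completed  ]]

The trace additionally cross-checks the controller name against nvme0 with jq -r '.[].name' after bdev_nvme_get_controllers; the here-string form above is just a condensed equivalent of those separate hostrpc/jq invocations.
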
00:17:55.851 17:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.851 17:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.851 17:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:55.851 17:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:55.851 17:18:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:55.851 17:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:17:55.851 17:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:55.852 17:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:55.852 17:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:55.852 17:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:55.852 17:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:55.852 17:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:55.852 17:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.852 17:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.852 17:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.852 17:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:55.852 17:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:56.427 00:17:56.427 17:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:56.427 17:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.427 17:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:56.428 17:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.428 17:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:56.428 17:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.428 17:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.428 17:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.428 17:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:56.428 { 00:17:56.428 "auth": { 00:17:56.428 "dhgroup": "ffdhe2048", 00:17:56.428 "digest": "sha512", 00:17:56.428 "state": "completed" 00:17:56.428 }, 00:17:56.428 "cntlid": 107, 00:17:56.428 "listen_address": { 00:17:56.428 "adrfam": "IPv4", 00:17:56.428 "traddr": "10.0.0.2", 00:17:56.428 "trsvcid": "4420", 00:17:56.428 "trtype": "TCP" 00:17:56.428 }, 00:17:56.428 "peer_address": { 00:17:56.428 "adrfam": "IPv4", 00:17:56.428 "traddr": "10.0.0.1", 00:17:56.428 "trsvcid": "39396", 00:17:56.428 "trtype": "TCP" 00:17:56.428 }, 00:17:56.428 "qid": 0, 00:17:56.428 "state": "enabled", 00:17:56.428 "thread": "nvmf_tgt_poll_group_000" 00:17:56.428 } 00:17:56.428 ]' 00:17:56.428 17:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:56.685 17:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:56.685 17:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:56.685 17:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:56.685 17:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:56.685 17:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:56.685 17:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.685 17:18:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.943 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid 6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-secret DHHC-1:01:ZTk2MjA4NzJiNjNkZTQ4OWRiNjA3NmJkM2E3MmY5NTRplldX: --dhchap-ctrl-secret DHHC-1:02:Y2IzMjdhMTQ0YmZkY2RlMWQyNGZhNjViNWJhMTI5YmUzY2U3MjIwZTRjZDhiZDE46kAWTg==: 00:17:57.509 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:57.509 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:57.766 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:17:57.766 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.766 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.766 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.766 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:57.766 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 
-- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:57.766 17:18:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:58.024 17:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:17:58.024 17:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:58.024 17:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:58.024 17:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:58.024 17:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:58.024 17:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:58.024 17:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:58.024 17:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.024 17:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.024 17:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.024 17:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:58.024 17:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:58.281 00:17:58.281 17:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:58.281 17:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:58.281 17:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.538 17:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.538 17:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:58.538 17:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.538 17:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.538 17:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.538 17:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:58.538 { 00:17:58.538 "auth": { 00:17:58.538 "dhgroup": 
"ffdhe2048", 00:17:58.538 "digest": "sha512", 00:17:58.538 "state": "completed" 00:17:58.539 }, 00:17:58.539 "cntlid": 109, 00:17:58.539 "listen_address": { 00:17:58.539 "adrfam": "IPv4", 00:17:58.539 "traddr": "10.0.0.2", 00:17:58.539 "trsvcid": "4420", 00:17:58.539 "trtype": "TCP" 00:17:58.539 }, 00:17:58.539 "peer_address": { 00:17:58.539 "adrfam": "IPv4", 00:17:58.539 "traddr": "10.0.0.1", 00:17:58.539 "trsvcid": "39424", 00:17:58.539 "trtype": "TCP" 00:17:58.539 }, 00:17:58.539 "qid": 0, 00:17:58.539 "state": "enabled", 00:17:58.539 "thread": "nvmf_tgt_poll_group_000" 00:17:58.539 } 00:17:58.539 ]' 00:17:58.539 17:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:58.539 17:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:58.539 17:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:58.797 17:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:58.797 17:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:58.797 17:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:58.797 17:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:58.797 17:18:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:59.055 17:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid 6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-secret DHHC-1:02:MjE5OWJmNzJjNjhjNjg4YTFiMDczZmMxZjMyZWJlZGRhZDEyZDAxNTliMTJmMjQ4MZCn5w==: --dhchap-ctrl-secret DHHC-1:01:MDUyYjY1NWU3ODRmOTZhN2NhZTU1ZjJlMGNmMGQ0OTR0PbC6: 00:17:59.621 17:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:59.621 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:59.621 17:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:17:59.621 17:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.621 17:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.621 17:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.621 17:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:59.621 17:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:59.621 17:18:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:59.879 17:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 
ffdhe2048 3 00:17:59.879 17:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:59.879 17:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:59.879 17:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:59.879 17:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:59.879 17:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:59.879 17:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-key key3 00:17:59.879 17:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.879 17:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.879 17:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.879 17:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:59.879 17:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:00.455 00:18:00.455 17:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:00.456 17:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:00.456 17:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:00.456 17:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.456 17:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:00.456 17:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.456 17:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.714 17:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.714 17:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:00.714 { 00:18:00.714 "auth": { 00:18:00.714 "dhgroup": "ffdhe2048", 00:18:00.714 "digest": "sha512", 00:18:00.714 "state": "completed" 00:18:00.714 }, 00:18:00.714 "cntlid": 111, 00:18:00.714 "listen_address": { 00:18:00.714 "adrfam": "IPv4", 00:18:00.714 "traddr": "10.0.0.2", 00:18:00.714 "trsvcid": "4420", 00:18:00.714 "trtype": "TCP" 00:18:00.714 }, 00:18:00.714 "peer_address": { 00:18:00.714 "adrfam": "IPv4", 00:18:00.714 "traddr": "10.0.0.1", 00:18:00.714 "trsvcid": "49894", 00:18:00.714 "trtype": "TCP" 00:18:00.714 }, 00:18:00.714 "qid": 0, 00:18:00.714 
"state": "enabled", 00:18:00.714 "thread": "nvmf_tgt_poll_group_000" 00:18:00.714 } 00:18:00.714 ]' 00:18:00.714 17:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:00.714 17:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:00.714 17:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:00.714 17:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:00.714 17:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:00.714 17:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:00.714 17:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:00.714 17:18:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:00.972 17:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid 6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-secret DHHC-1:03:OWNmOTRmNDAzYWIxNTc0ZjRlOGMzOGRiNmY2YjU5NWM3YWZlZmUxMTBjOTI4ZDM3Mjg5OGMxN2Q4MTkzZGMxZD8648c=: 00:18:01.904 17:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:01.904 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:01.904 17:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:18:01.904 17:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.904 17:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.904 17:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.904 17:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:01.904 17:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:01.904 17:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:01.904 17:18:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:02.162 17:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:18:02.162 17:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:02.162 17:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:02.162 17:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:02.162 17:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
key=key0 00:18:02.162 17:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:02.162 17:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:02.162 17:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.162 17:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.162 17:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.162 17:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:02.162 17:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:02.420 00:18:02.420 17:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:02.420 17:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:02.420 17:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.678 17:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.678 17:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:02.678 17:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.678 17:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.678 17:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.678 17:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:02.678 { 00:18:02.678 "auth": { 00:18:02.678 "dhgroup": "ffdhe3072", 00:18:02.678 "digest": "sha512", 00:18:02.678 "state": "completed" 00:18:02.678 }, 00:18:02.678 "cntlid": 113, 00:18:02.678 "listen_address": { 00:18:02.678 "adrfam": "IPv4", 00:18:02.678 "traddr": "10.0.0.2", 00:18:02.678 "trsvcid": "4420", 00:18:02.678 "trtype": "TCP" 00:18:02.678 }, 00:18:02.678 "peer_address": { 00:18:02.678 "adrfam": "IPv4", 00:18:02.678 "traddr": "10.0.0.1", 00:18:02.678 "trsvcid": "49914", 00:18:02.678 "trtype": "TCP" 00:18:02.678 }, 00:18:02.678 "qid": 0, 00:18:02.678 "state": "enabled", 00:18:02.678 "thread": "nvmf_tgt_poll_group_000" 00:18:02.678 } 00:18:02.678 ]' 00:18:02.678 17:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:02.678 17:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:02.678 17:18:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:02.936 17:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:02.936 17:18:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:02.936 17:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:02.936 17:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:02.936 17:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:03.194 17:18:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid 6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-secret DHHC-1:00:ODdiZjVjN2JlODE3MTUxOGZmN2QzMmJjNGQ0NzkwZThlZTI4OWNhODkyYzliYTQ467zLJw==: --dhchap-ctrl-secret DHHC-1:03:Y2M5NjRlZTk5YTM2OTYxMTRiOGQ0MTlkYzI4YTgxN2RiNDcyY2YzZGU1NzBiMmIzNzI1NmI0NDU3YmM0NjJiMryQLi0=: 00:18:04.129 17:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:04.129 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:04.129 17:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:18:04.129 17:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.129 17:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.129 17:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.129 17:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:04.129 17:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:04.129 17:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:04.129 17:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:18:04.129 17:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:04.129 17:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:04.129 17:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:04.129 17:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:04.129 17:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:04.129 17:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:04.129 17:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.129 17:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.129 17:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.129 17:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:04.129 17:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:04.705 00:18:04.705 17:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:04.705 17:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:04.705 17:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.963 17:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.963 17:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:04.963 17:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.963 17:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.963 17:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.963 17:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:04.963 { 00:18:04.963 "auth": { 00:18:04.963 "dhgroup": "ffdhe3072", 00:18:04.963 "digest": "sha512", 00:18:04.963 "state": "completed" 00:18:04.963 }, 00:18:04.963 "cntlid": 115, 00:18:04.963 "listen_address": { 00:18:04.963 "adrfam": "IPv4", 00:18:04.963 "traddr": "10.0.0.2", 00:18:04.963 "trsvcid": "4420", 00:18:04.963 "trtype": "TCP" 00:18:04.963 }, 00:18:04.963 "peer_address": { 00:18:04.963 "adrfam": "IPv4", 00:18:04.963 "traddr": "10.0.0.1", 00:18:04.963 "trsvcid": "49932", 00:18:04.963 "trtype": "TCP" 00:18:04.963 }, 00:18:04.963 "qid": 0, 00:18:04.963 "state": "enabled", 00:18:04.963 "thread": "nvmf_tgt_poll_group_000" 00:18:04.963 } 00:18:04.963 ]' 00:18:04.963 17:18:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:04.963 17:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:04.963 17:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:04.963 17:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:04.963 17:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:04.963 17:18:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:04.963 17:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:04.963 17:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:05.221 17:18:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid 6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-secret DHHC-1:01:ZTk2MjA4NzJiNjNkZTQ4OWRiNjA3NmJkM2E3MmY5NTRplldX: --dhchap-ctrl-secret DHHC-1:02:Y2IzMjdhMTQ0YmZkY2RlMWQyNGZhNjViNWJhMTI5YmUzY2U3MjIwZTRjZDhiZDE46kAWTg==: 00:18:06.153 17:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:06.153 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:06.153 17:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:18:06.153 17:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.153 17:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.153 17:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.153 17:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:06.153 17:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:06.153 17:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:06.410 17:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:18:06.410 17:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:06.410 17:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:06.410 17:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:06.410 17:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:06.410 17:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:06.410 17:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:06.410 17:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.410 17:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.410 17:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.411 17:18:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:06.411 17:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:06.668 00:18:06.668 17:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:06.668 17:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.668 17:18:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:06.926 17:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.926 17:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:06.927 17:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.927 17:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.927 17:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.927 17:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:06.927 { 00:18:06.927 "auth": { 00:18:06.927 "dhgroup": "ffdhe3072", 00:18:06.927 "digest": "sha512", 00:18:06.927 "state": "completed" 00:18:06.927 }, 00:18:06.927 "cntlid": 117, 00:18:06.927 "listen_address": { 00:18:06.927 "adrfam": "IPv4", 00:18:06.927 "traddr": "10.0.0.2", 00:18:06.927 "trsvcid": "4420", 00:18:06.927 "trtype": "TCP" 00:18:06.927 }, 00:18:06.927 "peer_address": { 00:18:06.927 "adrfam": "IPv4", 00:18:06.927 "traddr": "10.0.0.1", 00:18:06.927 "trsvcid": "49972", 00:18:06.927 "trtype": "TCP" 00:18:06.927 }, 00:18:06.927 "qid": 0, 00:18:06.927 "state": "enabled", 00:18:06.927 "thread": "nvmf_tgt_poll_group_000" 00:18:06.927 } 00:18:06.927 ]' 00:18:06.927 17:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:07.185 17:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:07.185 17:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:07.185 17:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:07.185 17:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:07.185 17:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:07.185 17:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:07.185 17:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
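The target side of the same handshake is driven through rpc_cmd against the nvmf target's RPC socket; a minimal sketch of one key rotation, again using only commands that appear verbatim in this log ($HOSTNQN and $HOSTID abbreviate the uuid-based host NQN and host ID, and $KEY/$CKEY the full DHHC-1:... secrets printed above):

# Register the host on the subsystem and bind it to key2/ckey2 on the target.
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2

# Kernel initiator: connect with the matching bidirectional DH-HMAC-CHAP secrets.
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "$HOSTNQN" --hostid "$HOSTID" \
    --dhchap-secret "$KEY" --dhchap-ctrl-secret "$CKEY"

# Tear down so the next digest/dhgroup combination starts clean.
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN"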
00:18:07.442 17:18:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid 6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-secret DHHC-1:02:MjE5OWJmNzJjNjhjNjg4YTFiMDczZmMxZjMyZWJlZGRhZDEyZDAxNTliMTJmMjQ4MZCn5w==: --dhchap-ctrl-secret DHHC-1:01:MDUyYjY1NWU3ODRmOTZhN2NhZTU1ZjJlMGNmMGQ0OTR0PbC6: 00:18:08.007 17:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:08.007 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:08.007 17:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:18:08.007 17:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.007 17:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.007 17:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.007 17:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:08.007 17:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:08.007 17:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:08.281 17:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:18:08.281 17:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:08.281 17:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:08.281 17:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:08.281 17:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:08.281 17:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:08.281 17:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-key key3 00:18:08.281 17:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.281 17:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.281 17:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.281 17:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:08.281 17:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:08.863 00:18:08.863 17:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:08.863 17:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:08.863 17:18:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:08.863 17:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.863 17:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:08.863 17:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.863 17:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.863 17:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.863 17:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:08.863 { 00:18:08.863 "auth": { 00:18:08.863 "dhgroup": "ffdhe3072", 00:18:08.863 "digest": "sha512", 00:18:08.863 "state": "completed" 00:18:08.863 }, 00:18:08.863 "cntlid": 119, 00:18:08.863 "listen_address": { 00:18:08.863 "adrfam": "IPv4", 00:18:08.863 "traddr": "10.0.0.2", 00:18:08.863 "trsvcid": "4420", 00:18:08.863 "trtype": "TCP" 00:18:08.863 }, 00:18:08.863 "peer_address": { 00:18:08.863 "adrfam": "IPv4", 00:18:08.863 "traddr": "10.0.0.1", 00:18:08.863 "trsvcid": "49998", 00:18:08.863 "trtype": "TCP" 00:18:08.863 }, 00:18:08.863 "qid": 0, 00:18:08.863 "state": "enabled", 00:18:08.863 "thread": "nvmf_tgt_poll_group_000" 00:18:08.863 } 00:18:08.863 ]' 00:18:09.122 17:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:09.122 17:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:09.122 17:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:09.122 17:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:09.122 17:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:09.122 17:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:09.122 17:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:09.122 17:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:09.381 17:18:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid 6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-secret DHHC-1:03:OWNmOTRmNDAzYWIxNTc0ZjRlOGMzOGRiNmY2YjU5NWM3YWZlZmUxMTBjOTI4ZDM3Mjg5OGMxN2Q4MTkzZGMxZD8648c=: 00:18:09.948 17:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:18:09.948 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:09.948 17:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:18:09.948 17:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.948 17:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.948 17:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.948 17:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:09.948 17:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:09.948 17:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:09.948 17:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:10.207 17:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:18:10.207 17:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:10.207 17:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:10.207 17:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:10.207 17:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:10.207 17:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:10.207 17:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:10.207 17:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.207 17:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.207 17:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.207 17:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:10.207 17:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:10.774 00:18:10.774 17:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:10.774 17:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:10.774 17:18:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:11.033 17:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.033 17:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:11.033 17:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.033 17:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.033 17:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.033 17:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:11.033 { 00:18:11.033 "auth": { 00:18:11.033 "dhgroup": "ffdhe4096", 00:18:11.033 "digest": "sha512", 00:18:11.033 "state": "completed" 00:18:11.033 }, 00:18:11.033 "cntlid": 121, 00:18:11.033 "listen_address": { 00:18:11.033 "adrfam": "IPv4", 00:18:11.033 "traddr": "10.0.0.2", 00:18:11.033 "trsvcid": "4420", 00:18:11.033 "trtype": "TCP" 00:18:11.033 }, 00:18:11.033 "peer_address": { 00:18:11.033 "adrfam": "IPv4", 00:18:11.033 "traddr": "10.0.0.1", 00:18:11.033 "trsvcid": "58040", 00:18:11.033 "trtype": "TCP" 00:18:11.033 }, 00:18:11.033 "qid": 0, 00:18:11.033 "state": "enabled", 00:18:11.033 "thread": "nvmf_tgt_poll_group_000" 00:18:11.033 } 00:18:11.033 ]' 00:18:11.033 17:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:11.033 17:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:11.033 17:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:11.033 17:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:11.033 17:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:11.033 17:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:11.033 17:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:11.033 17:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:11.599 17:18:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid 6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-secret DHHC-1:00:ODdiZjVjN2JlODE3MTUxOGZmN2QzMmJjNGQ0NzkwZThlZTI4OWNhODkyYzliYTQ467zLJw==: --dhchap-ctrl-secret DHHC-1:03:Y2M5NjRlZTk5YTM2OTYxMTRiOGQ0MTlkYzI4YTgxN2RiNDcyY2YzZGU1NzBiMmIzNzI1NmI0NDU3YmM0NjJiMryQLi0=: 00:18:12.218 17:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:12.218 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:12.218 17:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:18:12.218 17:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.218 17:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.218 17:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.218 17:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:12.218 17:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:12.218 17:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:12.477 17:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:18:12.477 17:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:12.477 17:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:12.477 17:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:12.477 17:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:12.477 17:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:12.477 17:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:12.477 17:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.477 17:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.477 17:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.477 17:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:12.477 17:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:12.735 00:18:12.735 17:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:12.735 17:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:12.735 17:18:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:12.993 17:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.993 17:18:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:12.993 17:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.993 17:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.993 17:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.993 17:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:12.993 { 00:18:12.993 "auth": { 00:18:12.993 "dhgroup": "ffdhe4096", 00:18:12.993 "digest": "sha512", 00:18:12.993 "state": "completed" 00:18:12.993 }, 00:18:12.993 "cntlid": 123, 00:18:12.993 "listen_address": { 00:18:12.993 "adrfam": "IPv4", 00:18:12.993 "traddr": "10.0.0.2", 00:18:12.993 "trsvcid": "4420", 00:18:12.993 "trtype": "TCP" 00:18:12.993 }, 00:18:12.993 "peer_address": { 00:18:12.993 "adrfam": "IPv4", 00:18:12.993 "traddr": "10.0.0.1", 00:18:12.993 "trsvcid": "58078", 00:18:12.993 "trtype": "TCP" 00:18:12.993 }, 00:18:12.993 "qid": 0, 00:18:12.993 "state": "enabled", 00:18:12.993 "thread": "nvmf_tgt_poll_group_000" 00:18:12.993 } 00:18:12.993 ]' 00:18:12.993 17:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:13.251 17:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:13.251 17:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:13.251 17:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:13.251 17:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:13.252 17:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:13.252 17:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:13.252 17:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:13.510 17:18:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid 6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-secret DHHC-1:01:ZTk2MjA4NzJiNjNkZTQ4OWRiNjA3NmJkM2E3MmY5NTRplldX: --dhchap-ctrl-secret DHHC-1:02:Y2IzMjdhMTQ0YmZkY2RlMWQyNGZhNjViNWJhMTI5YmUzY2U3MjIwZTRjZDhiZDE46kAWTg==: 00:18:14.078 17:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:14.078 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:14.078 17:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:18:14.078 17:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.078 17:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.078 17:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
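The verification step that follows every attach (target/auth.sh@44-48 above) amounts to the following bash sketch; the RPC name and jq filters are exactly those in the trace, while qpairs is only a local variable introduced here for illustration:

# After a successful attach, the target should report one enabled queue pair
# whose auth block matches the digest/dhgroup configured for this iteration.
qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "sha512" ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "ffdhe4096" ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]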
00:18:14.078 17:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:14.078 17:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:14.078 17:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:14.336 17:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:18:14.336 17:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:14.336 17:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:14.336 17:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:14.336 17:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:14.336 17:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:14.336 17:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:14.336 17:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.336 17:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.336 17:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.336 17:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:14.336 17:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:14.902 00:18:14.902 17:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:14.902 17:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:14.902 17:19:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:15.160 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.160 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:15.160 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.160 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.160 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.160 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:15.160 { 00:18:15.160 "auth": { 00:18:15.160 "dhgroup": "ffdhe4096", 00:18:15.160 "digest": "sha512", 00:18:15.160 "state": "completed" 00:18:15.160 }, 00:18:15.160 "cntlid": 125, 00:18:15.160 "listen_address": { 00:18:15.160 "adrfam": "IPv4", 00:18:15.160 "traddr": "10.0.0.2", 00:18:15.160 "trsvcid": "4420", 00:18:15.160 "trtype": "TCP" 00:18:15.160 }, 00:18:15.160 "peer_address": { 00:18:15.160 "adrfam": "IPv4", 00:18:15.160 "traddr": "10.0.0.1", 00:18:15.160 "trsvcid": "58114", 00:18:15.160 "trtype": "TCP" 00:18:15.160 }, 00:18:15.160 "qid": 0, 00:18:15.160 "state": "enabled", 00:18:15.160 "thread": "nvmf_tgt_poll_group_000" 00:18:15.160 } 00:18:15.160 ]' 00:18:15.160 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:15.160 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:15.161 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:15.161 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:15.161 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:15.161 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:15.161 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:15.161 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:15.726 17:19:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid 6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-secret DHHC-1:02:MjE5OWJmNzJjNjhjNjg4YTFiMDczZmMxZjMyZWJlZGRhZDEyZDAxNTliMTJmMjQ4MZCn5w==: --dhchap-ctrl-secret DHHC-1:01:MDUyYjY1NWU3ODRmOTZhN2NhZTU1ZjJlMGNmMGQ0OTR0PbC6: 00:18:16.293 17:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:16.293 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:16.293 17:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:18:16.293 17:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.293 17:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.293 17:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.293 17:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:16.293 17:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:16.293 17:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:16.550 17:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:18:16.551 17:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:16.551 17:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:16.551 17:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:16.551 17:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:16.551 17:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:16.551 17:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-key key3 00:18:16.551 17:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.551 17:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.551 17:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.551 17:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:16.551 17:19:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:17.116 00:18:17.116 17:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:17.116 17:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:17.116 17:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:17.374 17:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.374 17:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:17.374 17:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.374 17:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.374 17:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.374 17:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:17.374 { 00:18:17.374 "auth": { 00:18:17.374 "dhgroup": "ffdhe4096", 00:18:17.374 "digest": "sha512", 00:18:17.374 "state": "completed" 00:18:17.374 }, 00:18:17.374 "cntlid": 127, 00:18:17.374 "listen_address": { 00:18:17.374 "adrfam": "IPv4", 00:18:17.374 "traddr": "10.0.0.2", 00:18:17.374 "trsvcid": "4420", 00:18:17.374 "trtype": "TCP" 00:18:17.374 }, 
00:18:17.374 "peer_address": { 00:18:17.374 "adrfam": "IPv4", 00:18:17.374 "traddr": "10.0.0.1", 00:18:17.374 "trsvcid": "58136", 00:18:17.374 "trtype": "TCP" 00:18:17.374 }, 00:18:17.374 "qid": 0, 00:18:17.374 "state": "enabled", 00:18:17.374 "thread": "nvmf_tgt_poll_group_000" 00:18:17.374 } 00:18:17.374 ]' 00:18:17.374 17:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:17.374 17:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:17.374 17:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:17.374 17:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:17.374 17:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:17.374 17:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:17.374 17:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:17.374 17:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:17.940 17:19:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid 6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-secret DHHC-1:03:OWNmOTRmNDAzYWIxNTc0ZjRlOGMzOGRiNmY2YjU5NWM3YWZlZmUxMTBjOTI4ZDM3Mjg5OGMxN2Q4MTkzZGMxZD8648c=: 00:18:18.506 17:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:18.506 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:18.506 17:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:18:18.506 17:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.506 17:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.506 17:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.506 17:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:18.506 17:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:18.506 17:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:18.506 17:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:18.765 17:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:18:18.765 17:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:18.765 17:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
digest=sha512 00:18:18.765 17:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:18.765 17:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:18.765 17:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:18.765 17:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:18.765 17:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.765 17:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.765 17:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.765 17:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:18.765 17:19:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:19.331 00:18:19.331 17:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:19.331 17:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:19.331 17:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:19.589 17:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.589 17:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:19.589 17:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.589 17:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.589 17:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.589 17:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:19.589 { 00:18:19.589 "auth": { 00:18:19.589 "dhgroup": "ffdhe6144", 00:18:19.589 "digest": "sha512", 00:18:19.589 "state": "completed" 00:18:19.589 }, 00:18:19.589 "cntlid": 129, 00:18:19.589 "listen_address": { 00:18:19.589 "adrfam": "IPv4", 00:18:19.589 "traddr": "10.0.0.2", 00:18:19.589 "trsvcid": "4420", 00:18:19.589 "trtype": "TCP" 00:18:19.589 }, 00:18:19.589 "peer_address": { 00:18:19.589 "adrfam": "IPv4", 00:18:19.589 "traddr": "10.0.0.1", 00:18:19.589 "trsvcid": "58158", 00:18:19.589 "trtype": "TCP" 00:18:19.589 }, 00:18:19.589 "qid": 0, 00:18:19.589 "state": "enabled", 00:18:19.589 "thread": "nvmf_tgt_poll_group_000" 00:18:19.589 } 00:18:19.589 ]' 00:18:19.589 17:19:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:19.589 17:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:19.589 17:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:19.845 17:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:19.845 17:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:19.845 17:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:19.845 17:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:19.845 17:19:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:20.102 17:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid 6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-secret DHHC-1:00:ODdiZjVjN2JlODE3MTUxOGZmN2QzMmJjNGQ0NzkwZThlZTI4OWNhODkyYzliYTQ467zLJw==: --dhchap-ctrl-secret DHHC-1:03:Y2M5NjRlZTk5YTM2OTYxMTRiOGQ0MTlkYzI4YTgxN2RiNDcyY2YzZGU1NzBiMmIzNzI1NmI0NDU3YmM0NjJiMryQLi0=: 00:18:21.037 17:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:21.037 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:21.037 17:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:18:21.037 17:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.037 17:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.037 17:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.037 17:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:21.037 17:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:21.037 17:19:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:21.037 17:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:18:21.037 17:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:21.037 17:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:21.037 17:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:21.037 17:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:21.037 17:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:18:21.037 17:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:21.037 17:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.037 17:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.037 17:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.037 17:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:21.037 17:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:21.603 00:18:21.603 17:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:21.603 17:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:21.603 17:19:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:21.862 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.862 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:21.862 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.862 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.862 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.862 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:21.862 { 00:18:21.862 "auth": { 00:18:21.862 "dhgroup": "ffdhe6144", 00:18:21.862 "digest": "sha512", 00:18:21.862 "state": "completed" 00:18:21.862 }, 00:18:21.862 "cntlid": 131, 00:18:21.862 "listen_address": { 00:18:21.862 "adrfam": "IPv4", 00:18:21.862 "traddr": "10.0.0.2", 00:18:21.862 "trsvcid": "4420", 00:18:21.862 "trtype": "TCP" 00:18:21.862 }, 00:18:21.862 "peer_address": { 00:18:21.862 "adrfam": "IPv4", 00:18:21.862 "traddr": "10.0.0.1", 00:18:21.862 "trsvcid": "43246", 00:18:21.862 "trtype": "TCP" 00:18:21.862 }, 00:18:21.862 "qid": 0, 00:18:21.862 "state": "enabled", 00:18:21.862 "thread": "nvmf_tgt_poll_group_000" 00:18:21.862 } 00:18:21.862 ]' 00:18:21.862 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:22.121 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:22.121 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:22.121 17:19:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:22.121 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:22.121 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:22.121 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:22.121 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:22.379 17:19:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid 6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-secret DHHC-1:01:ZTk2MjA4NzJiNjNkZTQ4OWRiNjA3NmJkM2E3MmY5NTRplldX: --dhchap-ctrl-secret DHHC-1:02:Y2IzMjdhMTQ0YmZkY2RlMWQyNGZhNjViNWJhMTI5YmUzY2U3MjIwZTRjZDhiZDE46kAWTg==: 00:18:23.009 17:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:23.009 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:23.009 17:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:18:23.009 17:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.009 17:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.009 17:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.009 17:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:23.009 17:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:23.009 17:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:23.575 17:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:18:23.575 17:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:23.575 17:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:23.575 17:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:23.575 17:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:23.575 17:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:23.575 17:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:23.575 17:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:18:23.575 17:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.575 17:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.575 17:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:23.575 17:19:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:23.834 00:18:23.834 17:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:23.834 17:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:23.834 17:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:24.091 17:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.091 17:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:24.091 17:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.091 17:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.091 17:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.091 17:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:24.091 { 00:18:24.091 "auth": { 00:18:24.091 "dhgroup": "ffdhe6144", 00:18:24.091 "digest": "sha512", 00:18:24.091 "state": "completed" 00:18:24.091 }, 00:18:24.091 "cntlid": 133, 00:18:24.091 "listen_address": { 00:18:24.091 "adrfam": "IPv4", 00:18:24.091 "traddr": "10.0.0.2", 00:18:24.091 "trsvcid": "4420", 00:18:24.091 "trtype": "TCP" 00:18:24.091 }, 00:18:24.091 "peer_address": { 00:18:24.091 "adrfam": "IPv4", 00:18:24.091 "traddr": "10.0.0.1", 00:18:24.091 "trsvcid": "43272", 00:18:24.091 "trtype": "TCP" 00:18:24.091 }, 00:18:24.091 "qid": 0, 00:18:24.091 "state": "enabled", 00:18:24.091 "thread": "nvmf_tgt_poll_group_000" 00:18:24.091 } 00:18:24.091 ]' 00:18:24.091 17:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:24.349 17:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:24.349 17:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:24.349 17:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:24.349 17:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:24.349 17:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:24.349 17:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:24.349 17:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:24.608 17:19:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid 6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-secret DHHC-1:02:MjE5OWJmNzJjNjhjNjg4YTFiMDczZmMxZjMyZWJlZGRhZDEyZDAxNTliMTJmMjQ4MZCn5w==: --dhchap-ctrl-secret DHHC-1:01:MDUyYjY1NWU3ODRmOTZhN2NhZTU1ZjJlMGNmMGQ0OTR0PbC6: 00:18:25.174 17:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:25.174 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:25.174 17:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:18:25.174 17:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.174 17:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.174 17:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.174 17:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:25.174 17:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:25.174 17:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:25.432 17:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:18:25.432 17:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:25.432 17:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:25.432 17:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:25.432 17:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:25.432 17:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:25.432 17:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-key key3 00:18:25.432 17:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.432 17:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.432 17:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.432 17:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:25.432 17:19:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:25.998 00:18:25.999 17:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:25.999 17:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:25.999 17:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:26.257 17:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.257 17:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:26.257 17:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.257 17:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.257 17:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.257 17:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:26.257 { 00:18:26.257 "auth": { 00:18:26.257 "dhgroup": "ffdhe6144", 00:18:26.257 "digest": "sha512", 00:18:26.257 "state": "completed" 00:18:26.257 }, 00:18:26.257 "cntlid": 135, 00:18:26.257 "listen_address": { 00:18:26.257 "adrfam": "IPv4", 00:18:26.257 "traddr": "10.0.0.2", 00:18:26.257 "trsvcid": "4420", 00:18:26.257 "trtype": "TCP" 00:18:26.257 }, 00:18:26.257 "peer_address": { 00:18:26.257 "adrfam": "IPv4", 00:18:26.257 "traddr": "10.0.0.1", 00:18:26.257 "trsvcid": "43300", 00:18:26.257 "trtype": "TCP" 00:18:26.257 }, 00:18:26.257 "qid": 0, 00:18:26.257 "state": "enabled", 00:18:26.257 "thread": "nvmf_tgt_poll_group_000" 00:18:26.257 } 00:18:26.257 ]' 00:18:26.257 17:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:26.257 17:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:26.257 17:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:26.257 17:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:26.257 17:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:26.516 17:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:26.516 17:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:26.516 17:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:26.516 17:19:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid 6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-secret DHHC-1:03:OWNmOTRmNDAzYWIxNTc0ZjRlOGMzOGRiNmY2YjU5NWM3YWZlZmUxMTBjOTI4ZDM3Mjg5OGMxN2Q4MTkzZGMxZD8648c=: 00:18:27.450 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:27.450 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:27.450 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:18:27.450 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.450 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.450 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.450 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:27.450 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:27.450 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:27.450 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:27.450 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:18:27.450 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:27.450 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:27.450 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:27.450 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:27.450 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:27.450 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:27.450 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.450 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.450 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.450 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:27.450 17:19:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:28.385 00:18:28.385 17:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:28.385 17:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:28.385 17:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:28.385 17:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.385 17:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:28.385 17:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.385 17:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.385 17:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.385 17:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:28.385 { 00:18:28.385 "auth": { 00:18:28.385 "dhgroup": "ffdhe8192", 00:18:28.385 "digest": "sha512", 00:18:28.385 "state": "completed" 00:18:28.385 }, 00:18:28.385 "cntlid": 137, 00:18:28.385 "listen_address": { 00:18:28.385 "adrfam": "IPv4", 00:18:28.385 "traddr": "10.0.0.2", 00:18:28.385 "trsvcid": "4420", 00:18:28.385 "trtype": "TCP" 00:18:28.385 }, 00:18:28.385 "peer_address": { 00:18:28.385 "adrfam": "IPv4", 00:18:28.385 "traddr": "10.0.0.1", 00:18:28.385 "trsvcid": "43336", 00:18:28.385 "trtype": "TCP" 00:18:28.385 }, 00:18:28.385 "qid": 0, 00:18:28.385 "state": "enabled", 00:18:28.385 "thread": "nvmf_tgt_poll_group_000" 00:18:28.385 } 00:18:28.385 ]' 00:18:28.385 17:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:28.643 17:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:28.643 17:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:28.643 17:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:28.643 17:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:28.643 17:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:28.643 17:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:28.643 17:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:28.901 17:19:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid 6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-secret DHHC-1:00:ODdiZjVjN2JlODE3MTUxOGZmN2QzMmJjNGQ0NzkwZThlZTI4OWNhODkyYzliYTQ467zLJw==: --dhchap-ctrl-secret DHHC-1:03:Y2M5NjRlZTk5YTM2OTYxMTRiOGQ0MTlkYzI4YTgxN2RiNDcyY2YzZGU1NzBiMmIzNzI1NmI0NDU3YmM0NjJiMryQLi0=: 00:18:29.468 17:19:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:29.468 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:29.468 17:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:18:29.468 17:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.468 17:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.468 17:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.468 17:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:29.468 17:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:29.468 17:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:30.035 17:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:18:30.035 17:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:30.035 17:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:30.035 17:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:30.035 17:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:30.035 17:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:30.035 17:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:30.035 17:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.035 17:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.035 17:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.035 17:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:30.035 17:19:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:30.602 00:18:30.602 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:30.602 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 
00:18:30.602 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:30.860 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.860 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:30.860 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.860 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.860 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.860 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:30.860 { 00:18:30.860 "auth": { 00:18:30.860 "dhgroup": "ffdhe8192", 00:18:30.860 "digest": "sha512", 00:18:30.860 "state": "completed" 00:18:30.860 }, 00:18:30.860 "cntlid": 139, 00:18:30.860 "listen_address": { 00:18:30.860 "adrfam": "IPv4", 00:18:30.860 "traddr": "10.0.0.2", 00:18:30.860 "trsvcid": "4420", 00:18:30.860 "trtype": "TCP" 00:18:30.860 }, 00:18:30.860 "peer_address": { 00:18:30.860 "adrfam": "IPv4", 00:18:30.860 "traddr": "10.0.0.1", 00:18:30.860 "trsvcid": "33946", 00:18:30.860 "trtype": "TCP" 00:18:30.860 }, 00:18:30.860 "qid": 0, 00:18:30.860 "state": "enabled", 00:18:30.860 "thread": "nvmf_tgt_poll_group_000" 00:18:30.860 } 00:18:30.860 ]' 00:18:30.860 17:19:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:30.860 17:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:30.860 17:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:30.860 17:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:30.860 17:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:31.119 17:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:31.119 17:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:31.119 17:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:31.378 17:19:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid 6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-secret DHHC-1:01:ZTk2MjA4NzJiNjNkZTQ4OWRiNjA3NmJkM2E3MmY5NTRplldX: --dhchap-ctrl-secret DHHC-1:02:Y2IzMjdhMTQ0YmZkY2RlMWQyNGZhNjViNWJhMTI5YmUzY2U3MjIwZTRjZDhiZDE46kAWTg==: 00:18:31.944 17:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:31.944 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:31.944 17:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:18:31.944 17:19:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.944 17:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.944 17:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.944 17:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:31.944 17:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:31.944 17:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:32.203 17:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:18:32.203 17:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:32.203 17:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:32.203 17:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:32.203 17:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:32.203 17:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:32.203 17:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:32.203 17:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.203 17:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.203 17:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.203 17:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:32.203 17:19:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:33.138 00:18:33.138 17:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:33.138 17:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:33.138 17:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:33.138 17:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.138 17:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:18:33.138 17:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.138 17:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.396 17:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.396 17:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:33.396 { 00:18:33.396 "auth": { 00:18:33.396 "dhgroup": "ffdhe8192", 00:18:33.396 "digest": "sha512", 00:18:33.396 "state": "completed" 00:18:33.396 }, 00:18:33.396 "cntlid": 141, 00:18:33.396 "listen_address": { 00:18:33.396 "adrfam": "IPv4", 00:18:33.396 "traddr": "10.0.0.2", 00:18:33.396 "trsvcid": "4420", 00:18:33.396 "trtype": "TCP" 00:18:33.396 }, 00:18:33.396 "peer_address": { 00:18:33.396 "adrfam": "IPv4", 00:18:33.396 "traddr": "10.0.0.1", 00:18:33.396 "trsvcid": "33990", 00:18:33.396 "trtype": "TCP" 00:18:33.396 }, 00:18:33.396 "qid": 0, 00:18:33.396 "state": "enabled", 00:18:33.396 "thread": "nvmf_tgt_poll_group_000" 00:18:33.396 } 00:18:33.396 ]' 00:18:33.396 17:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:33.396 17:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:33.396 17:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:33.396 17:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:33.396 17:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:33.396 17:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:33.396 17:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:33.396 17:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:33.655 17:19:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid 6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-secret DHHC-1:02:MjE5OWJmNzJjNjhjNjg4YTFiMDczZmMxZjMyZWJlZGRhZDEyZDAxNTliMTJmMjQ4MZCn5w==: --dhchap-ctrl-secret DHHC-1:01:MDUyYjY1NWU3ODRmOTZhN2NhZTU1ZjJlMGNmMGQ0OTR0PbC6: 00:18:34.222 17:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:34.222 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:34.222 17:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:18:34.222 17:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.222 17:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.222 17:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.222 17:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:18:34.222 17:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:34.222 17:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:34.788 17:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:18:34.788 17:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:34.788 17:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:34.788 17:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:34.788 17:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:34.788 17:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:34.788 17:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-key key3 00:18:34.788 17:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.788 17:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.788 17:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.788 17:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:34.788 17:19:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:35.355 00:18:35.355 17:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:35.355 17:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:35.355 17:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:35.614 17:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.614 17:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:35.614 17:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.614 17:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.614 17:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.614 17:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:35.614 { 00:18:35.614 "auth": { 
00:18:35.614 "dhgroup": "ffdhe8192", 00:18:35.614 "digest": "sha512", 00:18:35.614 "state": "completed" 00:18:35.614 }, 00:18:35.614 "cntlid": 143, 00:18:35.614 "listen_address": { 00:18:35.614 "adrfam": "IPv4", 00:18:35.614 "traddr": "10.0.0.2", 00:18:35.614 "trsvcid": "4420", 00:18:35.614 "trtype": "TCP" 00:18:35.614 }, 00:18:35.614 "peer_address": { 00:18:35.614 "adrfam": "IPv4", 00:18:35.614 "traddr": "10.0.0.1", 00:18:35.614 "trsvcid": "34028", 00:18:35.614 "trtype": "TCP" 00:18:35.614 }, 00:18:35.614 "qid": 0, 00:18:35.614 "state": "enabled", 00:18:35.614 "thread": "nvmf_tgt_poll_group_000" 00:18:35.614 } 00:18:35.614 ]' 00:18:35.614 17:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:35.614 17:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:35.614 17:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:35.614 17:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:35.614 17:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:35.872 17:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:35.872 17:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:35.872 17:19:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:36.131 17:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid 6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-secret DHHC-1:03:OWNmOTRmNDAzYWIxNTc0ZjRlOGMzOGRiNmY2YjU5NWM3YWZlZmUxMTBjOTI4ZDM3Mjg5OGMxN2Q4MTkzZGMxZD8648c=: 00:18:36.697 17:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:36.697 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:36.697 17:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:18:36.697 17:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.697 17:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.955 17:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.955 17:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:18:36.955 17:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:18:36.955 17:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:18:36.955 17:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:36.955 17:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:36.955 17:19:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:37.214 17:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:18:37.214 17:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:37.214 17:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:37.214 17:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:37.214 17:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:37.214 17:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:37.214 17:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:37.214 17:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.214 17:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.214 17:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.214 17:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:37.214 17:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:37.779 00:18:37.779 17:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:37.779 17:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:37.779 17:19:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:38.038 17:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.038 17:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:38.038 17:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.038 17:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.038 17:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.038 17:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:38.038 { 00:18:38.038 "auth": 
{ 00:18:38.038 "dhgroup": "ffdhe8192", 00:18:38.038 "digest": "sha512", 00:18:38.038 "state": "completed" 00:18:38.038 }, 00:18:38.038 "cntlid": 145, 00:18:38.038 "listen_address": { 00:18:38.038 "adrfam": "IPv4", 00:18:38.038 "traddr": "10.0.0.2", 00:18:38.038 "trsvcid": "4420", 00:18:38.038 "trtype": "TCP" 00:18:38.038 }, 00:18:38.038 "peer_address": { 00:18:38.038 "adrfam": "IPv4", 00:18:38.038 "traddr": "10.0.0.1", 00:18:38.038 "trsvcid": "34056", 00:18:38.038 "trtype": "TCP" 00:18:38.038 }, 00:18:38.038 "qid": 0, 00:18:38.038 "state": "enabled", 00:18:38.038 "thread": "nvmf_tgt_poll_group_000" 00:18:38.038 } 00:18:38.038 ]' 00:18:38.038 17:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:38.038 17:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:38.038 17:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:38.038 17:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:38.038 17:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:38.296 17:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:38.296 17:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:38.296 17:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:38.554 17:19:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid 6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-secret DHHC-1:00:ODdiZjVjN2JlODE3MTUxOGZmN2QzMmJjNGQ0NzkwZThlZTI4OWNhODkyYzliYTQ467zLJw==: --dhchap-ctrl-secret DHHC-1:03:Y2M5NjRlZTk5YTM2OTYxMTRiOGQ0MTlkYzI4YTgxN2RiNDcyY2YzZGU1NzBiMmIzNzI1NmI0NDU3YmM0NjJiMryQLi0=: 00:18:39.120 17:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:39.120 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:39.120 17:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:18:39.120 17:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.120 17:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.120 17:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.120 17:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-key key1 00:18:39.120 17:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.120 17:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.120 17:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:18:39.120 17:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:39.120 17:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:39.120 17:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:39.120 17:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:18:39.120 17:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:39.120 17:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:39.120 17:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:39.120 17:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:39.120 17:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:39.685 2024/07/24 17:19:25 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:18:39.685 request: 00:18:39.685 { 00:18:39.685 "method": "bdev_nvme_attach_controller", 00:18:39.685 "params": { 00:18:39.686 "name": "nvme0", 00:18:39.686 "trtype": "tcp", 00:18:39.686 "traddr": "10.0.0.2", 00:18:39.686 "adrfam": "ipv4", 00:18:39.686 "trsvcid": "4420", 00:18:39.686 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:39.686 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b", 00:18:39.686 "prchk_reftag": false, 00:18:39.686 "prchk_guard": false, 00:18:39.686 "hdgst": false, 00:18:39.686 "ddgst": false, 00:18:39.686 "dhchap_key": "key2" 00:18:39.686 } 00:18:39.686 } 00:18:39.686 Got JSON-RPC error response 00:18:39.686 GoRPCClient: error on JSON-RPC call 00:18:39.686 17:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:39.686 17:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:39.686 17:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:39.686 17:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es 
== 0 )) 00:18:39.686 17:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:18:39.686 17:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.686 17:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.686 17:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.686 17:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:39.686 17:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.686 17:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.686 17:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.686 17:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:39.686 17:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:39.686 17:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:39.686 17:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:18:39.686 17:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:39.686 17:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:39.686 17:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:39.686 17:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:39.686 17:19:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:40.626 2024/07/24 17:19:26 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) 
subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:18:40.626 request: 00:18:40.626 { 00:18:40.626 "method": "bdev_nvme_attach_controller", 00:18:40.626 "params": { 00:18:40.626 "name": "nvme0", 00:18:40.626 "trtype": "tcp", 00:18:40.626 "traddr": "10.0.0.2", 00:18:40.626 "adrfam": "ipv4", 00:18:40.626 "trsvcid": "4420", 00:18:40.626 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:40.626 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b", 00:18:40.626 "prchk_reftag": false, 00:18:40.626 "prchk_guard": false, 00:18:40.626 "hdgst": false, 00:18:40.626 "ddgst": false, 00:18:40.626 "dhchap_key": "key1", 00:18:40.626 "dhchap_ctrlr_key": "ckey2" 00:18:40.626 } 00:18:40.626 } 00:18:40.626 Got JSON-RPC error response 00:18:40.626 GoRPCClient: error on JSON-RPC call 00:18:40.626 17:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:40.626 17:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:40.626 17:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:40.626 17:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:40.626 17:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:18:40.626 17:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.626 17:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.626 17:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.626 17:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-key key1 00:18:40.626 17:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.626 17:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.626 17:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.626 17:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:40.626 17:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:40.626 17:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:40.626 17:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:18:40.626 17:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case 
"$(type -t "$arg")" in 00:18:40.626 17:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:40.626 17:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:40.626 17:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:40.626 17:19:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:41.192 2024/07/24 17:19:27 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey1 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:18:41.192 request: 00:18:41.192 { 00:18:41.192 "method": "bdev_nvme_attach_controller", 00:18:41.192 "params": { 00:18:41.192 "name": "nvme0", 00:18:41.192 "trtype": "tcp", 00:18:41.192 "traddr": "10.0.0.2", 00:18:41.192 "adrfam": "ipv4", 00:18:41.192 "trsvcid": "4420", 00:18:41.192 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:41.192 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b", 00:18:41.192 "prchk_reftag": false, 00:18:41.192 "prchk_guard": false, 00:18:41.192 "hdgst": false, 00:18:41.192 "ddgst": false, 00:18:41.192 "dhchap_key": "key1", 00:18:41.192 "dhchap_ctrlr_key": "ckey1" 00:18:41.192 } 00:18:41.192 } 00:18:41.192 Got JSON-RPC error response 00:18:41.192 GoRPCClient: error on JSON-RPC call 00:18:41.192 17:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:41.192 17:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:41.192 17:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:41.192 17:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:41.192 17:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:18:41.192 17:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.192 17:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.192 17:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.192 17:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 77233 00:18:41.192 17:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 77233 ']' 00:18:41.192 17:19:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 77233 00:18:41.192 17:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:18:41.192 17:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:41.192 17:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77233 00:18:41.192 17:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:41.192 17:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:41.192 17:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77233' 00:18:41.192 killing process with pid 77233 00:18:41.192 17:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 77233 00:18:41.192 17:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 77233 00:18:41.192 17:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:18:41.451 17:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:41.451 17:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:41.451 17:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.451 17:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=82127 00:18:41.451 17:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:18:41.451 17:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 82127 00:18:41.451 17:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 82127 ']' 00:18:41.451 17:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:41.451 17:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:41.451 17:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
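The entries above tear down the first target instance (pid 77233) and bring up a fresh nvmf_tgt with --wait-for-rpc and the nvmf_auth debug log component, so the remaining re-authentication cases run against a paused, instrumented target. A minimal stand-alone sketch of that restart, reusing the binary, flags, and rpc.py path printed in this trace; the polling loop and variable names are illustrative, not the autotest helpers, and rpc_get_methods/framework_start_init are used here as a simplification of what the script's waitforlisten step does:

  # Hypothetical sketch -- paths and flags mirror the ones printed in this log.
  NVMF_TGT=/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # Start the target paused, with DH-HMAC-CHAP debug logging enabled (-L nvmf_auth).
  # In the trace this runs inside 'ip netns exec nvmf_tgt_ns_spdk'; omitted here for brevity.
  "$NVMF_TGT" -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
  nvmfpid=$!   # the log reports this as nvmfpid=82127

  # Wait until the default RPC socket (/var/tmp/spdk.sock) answers, then resume initialization.
  until "$RPC" rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
  "$RPC" framework_start_init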
00:18:41.451 17:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:41.451 17:19:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.389 17:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:42.389 17:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:42.389 17:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:42.389 17:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:42.389 17:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.389 17:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:42.389 17:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:42.389 17:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 82127 00:18:42.389 17:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 82127 ']' 00:18:42.389 17:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:42.389 17:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:42.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:42.389 17:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
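Throughout this trace the same host/target pairing recurs: the target registers the host NQN with a named DH-HMAC-CHAP key via nvmf_subsystem_add_host, the host-side bdev_nvme module is pinned to one digest/dhgroup combination, and the controller is attached with the matching key. A condensed sketch of one such iteration, using the NQNs, addresses, and host RPC socket (/var/tmp/host.sock) visible above; this is a simplification of the script's connect_authenticate helper, not a copy of it, and it assumes the target listens on the default rpc.py socket while the keys were loaded earlier in the test:

  # Hypothetical condensation of one connect_authenticate pass from this log.
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  HOSTSOCK=/var/tmp/host.sock
  SUBNQN=nqn.2024-03.io.spdk:cnode0
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b

  # Target side: allow this host and bind it to key3.
  "$RPC" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key3

  # Host side: pin the digest/dhgroup, then attach with the same key.
  "$RPC" -s "$HOSTSOCK" bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
  "$RPC" -s "$HOSTSOCK" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key3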
00:18:42.389 17:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:42.389 17:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.648 17:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:42.648 17:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:42.648 17:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:18:42.648 17:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.648 17:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.907 17:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.907 17:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:18:42.907 17:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:42.907 17:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:42.907 17:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:42.907 17:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:42.907 17:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:42.907 17:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-key key3 00:18:42.907 17:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.907 17:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.907 17:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.907 17:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:42.907 17:19:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:43.473 00:18:43.473 17:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:43.473 17:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:43.473 17:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.732 17:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.732 17:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
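The nvmf_subsystem_get_qpairs call that closes the entry above is how each successful attach is verified: the qpair list is fetched from the target and the negotiated auth fields are compared with jq, exactly as in the [[ ... ]] checks that follow. A condensed sketch of that check, assuming the subsystem NQN from this log, the sha512/ffdhe8192 combination this iteration negotiates, and a target reachable on the default rpc.py socket:

  # Hypothetical condensation of the qpair verification seen in the trace.
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  SUBNQN=nqn.2024-03.io.spdk:cnode0

  qpairs=$("$RPC" nvmf_subsystem_get_qpairs "$SUBNQN")
  # Expect the first qpair to have completed DH-HMAC-CHAP with the pinned parameters.
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]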
00:18:43.732 17:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.732 17:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.732 17:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.732 17:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:43.732 { 00:18:43.732 "auth": { 00:18:43.732 "dhgroup": "ffdhe8192", 00:18:43.732 "digest": "sha512", 00:18:43.732 "state": "completed" 00:18:43.732 }, 00:18:43.732 "cntlid": 1, 00:18:43.732 "listen_address": { 00:18:43.732 "adrfam": "IPv4", 00:18:43.732 "traddr": "10.0.0.2", 00:18:43.732 "trsvcid": "4420", 00:18:43.732 "trtype": "TCP" 00:18:43.732 }, 00:18:43.732 "peer_address": { 00:18:43.732 "adrfam": "IPv4", 00:18:43.732 "traddr": "10.0.0.1", 00:18:43.732 "trsvcid": "57810", 00:18:43.732 "trtype": "TCP" 00:18:43.732 }, 00:18:43.732 "qid": 0, 00:18:43.732 "state": "enabled", 00:18:43.732 "thread": "nvmf_tgt_poll_group_000" 00:18:43.732 } 00:18:43.732 ]' 00:18:43.732 17:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:43.732 17:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:43.732 17:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:43.732 17:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:43.732 17:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:43.732 17:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:43.732 17:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:43.732 17:19:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:44.055 17:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid 6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-secret DHHC-1:03:OWNmOTRmNDAzYWIxNTc0ZjRlOGMzOGRiNmY2YjU5NWM3YWZlZmUxMTBjOTI4ZDM3Mjg5OGMxN2Q4MTkzZGMxZD8648c=: 00:18:44.990 17:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:44.990 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:44.990 17:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:18:44.990 17:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.990 17:19:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.990 17:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.990 17:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --dhchap-key key3 00:18:44.990 17:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.990 17:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.990 17:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.990 17:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:18:44.990 17:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:18:45.248 17:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:45.248 17:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:45.248 17:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:45.248 17:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:18:45.248 17:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:45.248 17:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:45.248 17:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:45.248 17:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:45.248 17:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:45.506 2024/07/24 17:19:31 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:18:45.506 request: 00:18:45.506 { 00:18:45.506 "method": "bdev_nvme_attach_controller", 00:18:45.506 "params": { 00:18:45.506 "name": "nvme0", 00:18:45.506 "trtype": "tcp", 00:18:45.506 "traddr": "10.0.0.2", 00:18:45.506 "adrfam": "ipv4", 00:18:45.506 "trsvcid": "4420", 00:18:45.506 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:45.506 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b", 00:18:45.506 "prchk_reftag": false, 00:18:45.506 "prchk_guard": false, 00:18:45.506 "hdgst": false, 00:18:45.506 "ddgst": false, 00:18:45.506 "dhchap_key": "key3" 00:18:45.506 } 00:18:45.506 } 00:18:45.506 Got JSON-RPC error response 00:18:45.506 GoRPCClient: error on JSON-RPC call 00:18:45.506 17:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:45.506 17:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:45.506 17:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:45.506 17:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:45.506 17:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:18:45.506 17:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:18:45.506 17:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:45.506 17:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:45.765 17:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:45.765 17:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:45.765 17:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:45.765 17:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:18:45.765 17:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:45.765 17:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:45.765 17:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:45.765 17:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:45.765 17:19:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:46.024 2024/07/24 17:19:32 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) 
hostnqn:nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:18:46.024 request: 00:18:46.024 { 00:18:46.024 "method": "bdev_nvme_attach_controller", 00:18:46.024 "params": { 00:18:46.024 "name": "nvme0", 00:18:46.024 "trtype": "tcp", 00:18:46.024 "traddr": "10.0.0.2", 00:18:46.024 "adrfam": "ipv4", 00:18:46.024 "trsvcid": "4420", 00:18:46.024 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:46.024 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b", 00:18:46.024 "prchk_reftag": false, 00:18:46.024 "prchk_guard": false, 00:18:46.024 "hdgst": false, 00:18:46.024 "ddgst": false, 00:18:46.024 "dhchap_key": "key3" 00:18:46.024 } 00:18:46.024 } 00:18:46.024 Got JSON-RPC error response 00:18:46.024 GoRPCClient: error on JSON-RPC call 00:18:46.024 17:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:46.024 17:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:46.024 17:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:46.024 17:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:46.024 17:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:18:46.024 17:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:18:46.024 17:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:18:46.024 17:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:46.024 17:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:46.024 17:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:46.283 17:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:18:46.283 17:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.283 17:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.283 17:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.283 17:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:18:46.283 17:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.283 17:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.283 17:19:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.283 17:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:46.283 17:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:46.283 17:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:46.283 17:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:18:46.283 17:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:46.283 17:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:46.283 17:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:46.283 17:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:46.283 17:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:46.541 2024/07/24 17:19:32 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:key1 dhchap_key:key0 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:18:46.541 request: 00:18:46.541 { 00:18:46.541 "method": "bdev_nvme_attach_controller", 00:18:46.541 "params": { 00:18:46.541 "name": "nvme0", 00:18:46.541 "trtype": "tcp", 00:18:46.541 "traddr": "10.0.0.2", 00:18:46.541 "adrfam": "ipv4", 00:18:46.541 "trsvcid": "4420", 00:18:46.541 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:46.541 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b", 00:18:46.541 "prchk_reftag": false, 00:18:46.541 "prchk_guard": false, 00:18:46.541 "hdgst": false, 00:18:46.541 "ddgst": false, 00:18:46.541 "dhchap_key": "key0", 00:18:46.541 "dhchap_ctrlr_key": "key1" 00:18:46.541 } 00:18:46.541 } 00:18:46.541 Got JSON-RPC error response 00:18:46.541 GoRPCClient: error on JSON-RPC call 00:18:46.800 17:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:46.800 17:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 
-- # (( es > 128 )) 00:18:46.800 17:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:46.800 17:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:46.800 17:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:46.800 17:19:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:47.058 00:18:47.058 17:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:18:47.058 17:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:18:47.058 17:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:47.315 17:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.315 17:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:47.315 17:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:47.573 17:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:18:47.573 17:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:18:47.573 17:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 77277 00:18:47.573 17:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 77277 ']' 00:18:47.573 17:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 77277 00:18:47.573 17:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:18:47.573 17:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:47.573 17:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77277 00:18:47.573 17:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:47.573 17:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:47.573 killing process with pid 77277 00:18:47.573 17:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77277' 00:18:47.573 17:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 77277 00:18:47.573 17:19:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 77277 00:18:48.175 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:18:48.175 17:19:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:48.175 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:18:48.175 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:48.175 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:18:48.175 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:48.175 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:48.175 rmmod nvme_tcp 00:18:48.175 rmmod nvme_fabrics 00:18:48.175 rmmod nvme_keyring 00:18:48.175 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:48.175 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:18:48.175 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:18:48.175 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 82127 ']' 00:18:48.175 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 82127 00:18:48.175 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 82127 ']' 00:18:48.175 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 82127 00:18:48.175 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:18:48.175 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:48.175 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82127 00:18:48.175 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:48.175 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:48.175 killing process with pid 82127 00:18:48.175 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82127' 00:18:48.175 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 82127 00:18:48.175 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 82127 00:18:48.434 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:48.434 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:48.434 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:48.434 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:48.434 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:48.434 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:48.434 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:48.434 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:48.434 17:19:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:48.434 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.Cuu /tmp/spdk.key-sha256.QmQ /tmp/spdk.key-sha384.5vN /tmp/spdk.key-sha512.ze6 /tmp/spdk.key-sha512.2Xq /tmp/spdk.key-sha384.PHf /tmp/spdk.key-sha256.Cef '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:18:48.434 00:18:48.434 real 2m51.799s 00:18:48.434 user 6m56.947s 00:18:48.434 sys 0m22.351s 00:18:48.434 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:48.434 ************************************ 00:18:48.434 END TEST nvmf_auth_target 00:18:48.434 ************************************ 00:18:48.434 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.434 17:19:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:18:48.434 17:19:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:48.434 17:19:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:18:48.434 17:19:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:48.434 17:19:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:48.434 ************************************ 00:18:48.434 START TEST nvmf_bdevio_no_huge 00:18:48.434 ************************************ 00:18:48.434 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:48.434 * Looking for test storage... 
00:18:48.434 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:48.434 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:48.434 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:18:48.434 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:48.434 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:48.434 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:48.434 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:48.434 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:48.434 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:48.434 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:48.434 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:48.434 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:48.434 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:48.434 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:18:48.434 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:18:48.434 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:48.434 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:48.434 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:48.434 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:48.434 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:48.434 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:48.434 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:48.434 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:48.434 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.434 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.434 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.434 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:18:48.434 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.434 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:18:48.434 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:48.434 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:48.434 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:48.434 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:48.434 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:48.434 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:18:48.434 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:48.434 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:48.434 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:48.434 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:48.434 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:18:48.434 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:48.434 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:48.434 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:48.434 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:48.434 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:48.434 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:48.434 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:48.434 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:48.692 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:48.692 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:48.692 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:48.692 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:48.692 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:48.692 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:48.693 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:48.693 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:48.693 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:48.693 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:48.693 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:48.693 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:48.693 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:48.693 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:48.693 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:48.693 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:48.693 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:48.693 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:48.693 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:48.693 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:48.693 Cannot find device "nvmf_tgt_br" 00:18:48.693 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # true 00:18:48.693 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:48.693 Cannot find device "nvmf_tgt_br2" 00:18:48.693 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # true 00:18:48.693 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:48.693 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:48.693 Cannot find device "nvmf_tgt_br" 00:18:48.693 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # true 00:18:48.693 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:48.693 Cannot find device "nvmf_tgt_br2" 00:18:48.693 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # true 00:18:48.693 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:48.693 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:48.693 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:48.693 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:48.693 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:18:48.693 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:48.693 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:48.693 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:18:48.693 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:48.693 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:48.693 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:48.693 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:48.693 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:48.693 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:48.693 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:48.693 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:48.693 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:48.693 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:48.693 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:48.693 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:48.693 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:48.693 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:48.693 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:48.693 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:48.693 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:48.693 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:48.693 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:48.952 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:48.952 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:48.952 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:48.952 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:48.952 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:48.952 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:48.952 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:18:48.952 00:18:48.952 --- 10.0.0.2 ping statistics --- 00:18:48.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:48.952 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:18:48.952 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:48.952 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:48.952 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:18:48.952 00:18:48.952 --- 10.0.0.3 ping statistics --- 00:18:48.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:48.952 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:18:48.952 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:48.952 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:48.952 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:18:48.952 00:18:48.952 --- 10.0.0.1 ping statistics --- 00:18:48.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:48.952 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:18:48.952 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:48.952 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@433 -- # return 0 00:18:48.952 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:48.952 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:48.952 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:48.952 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:48.952 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:48.952 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:48.952 17:19:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:48.952 17:19:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:48.952 17:19:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:48.952 17:19:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:48.952 17:19:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:48.952 17:19:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:18:48.952 17:19:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=82540 00:18:48.952 17:19:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 82540 00:18:48.952 17:19:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 82540 ']' 00:18:48.952 17:19:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:48.952 17:19:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:48.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:48.952 17:19:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:48.952 17:19:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:48.952 17:19:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:48.952 [2024-07-24 17:19:35.067559] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
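The nvmf_veth_init sequence above builds a self-contained NVMe/TCP test topology: a network namespace (nvmf_tgt_ns_spdk) holds the target-side veth ends (10.0.0.2 and 10.0.0.3), the initiator end (nvmf_init_if, 10.0.0.1) stays in the root namespace, the host-side peers are joined by the nvmf_br bridge, an iptables rule admits TCP port 4420, and three pings confirm reachability before the target is launched inside the namespace with --no-huge. A minimal stand-alone sketch of the same steps, assuming root privileges and reusing the interface names, addressing, and binary path shown in this log (the second target interface is omitted for brevity):

  # namespace + veth pairs (the target end moves into the namespace)
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  # addressing: initiator side in the root namespace, target side inside the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # bridge the host-side veth ends together and open the NVMe/TCP port
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2    # initiator -> target, as in the ping output above
  # start the target inside the namespace without hugepages (-s 1024 = 1024 MB of plain memory)
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &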
00:18:48.952 [2024-07-24 17:19:35.067688] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:18:49.210 [2024-07-24 17:19:35.220410] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:49.210 [2024-07-24 17:19:35.345030] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:49.210 [2024-07-24 17:19:35.345524] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:49.210 [2024-07-24 17:19:35.345869] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:49.210 [2024-07-24 17:19:35.346311] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:49.210 [2024-07-24 17:19:35.346569] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:49.210 [2024-07-24 17:19:35.346979] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:18:49.210 [2024-07-24 17:19:35.347130] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:18:49.210 [2024-07-24 17:19:35.347250] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:18:49.210 [2024-07-24 17:19:35.347342] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:50.142 17:19:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:50.142 17:19:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:18:50.142 17:19:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:50.142 17:19:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:50.142 17:19:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:50.142 17:19:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:50.142 17:19:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:50.142 17:19:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.142 17:19:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:50.142 [2024-07-24 17:19:36.129010] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:50.142 17:19:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.142 17:19:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:50.142 17:19:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.143 17:19:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:50.143 Malloc0 00:18:50.143 17:19:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.143 17:19:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:18:50.143 17:19:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.143 17:19:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:50.143 17:19:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.143 17:19:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:50.143 17:19:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.143 17:19:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:50.143 17:19:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.143 17:19:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:50.143 17:19:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.143 17:19:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:50.143 [2024-07-24 17:19:36.177348] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:50.143 17:19:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.143 17:19:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:18:50.143 17:19:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:50.143 17:19:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:18:50.143 17:19:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:18:50.143 17:19:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:50.143 17:19:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:50.143 { 00:18:50.143 "params": { 00:18:50.143 "name": "Nvme$subsystem", 00:18:50.143 "trtype": "$TEST_TRANSPORT", 00:18:50.143 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:50.143 "adrfam": "ipv4", 00:18:50.143 "trsvcid": "$NVMF_PORT", 00:18:50.143 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:50.143 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:50.143 "hdgst": ${hdgst:-false}, 00:18:50.143 "ddgst": ${ddgst:-false} 00:18:50.143 }, 00:18:50.143 "method": "bdev_nvme_attach_controller" 00:18:50.143 } 00:18:50.143 EOF 00:18:50.143 )") 00:18:50.143 17:19:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:18:50.143 17:19:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
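Seen together, the rpc_cmd calls above are the entire target-side provisioning for this test: create the TCP transport, back it with a 64 MiB malloc bdev, expose that bdev through subsystem nqn.2016-06.io.spdk:cnode1, and listen on 10.0.0.2:4420. A condensed sketch of the same calls issued with rpc.py directly, assuming the target started as above and answers on the default /var/tmp/spdk.sock:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192                                    # transport options exactly as in the log
  $rpc bdev_malloc_create 64 512 -b Malloc0                                       # 64 MiB bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001  # -a: allow any host
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevio then plays the initiator role: gen_nvmf_target_json renders the bdev_nvme_attach_controller parameters printed next into a JSON config, which is handed to the bdevio binary over /dev/fd/62 together with --no-huge -s 1024.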
00:18:50.143 17:19:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:18:50.143 17:19:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:50.143 "params": { 00:18:50.143 "name": "Nvme1", 00:18:50.143 "trtype": "tcp", 00:18:50.143 "traddr": "10.0.0.2", 00:18:50.143 "adrfam": "ipv4", 00:18:50.143 "trsvcid": "4420", 00:18:50.143 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:50.143 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:50.143 "hdgst": false, 00:18:50.143 "ddgst": false 00:18:50.143 }, 00:18:50.143 "method": "bdev_nvme_attach_controller" 00:18:50.143 }' 00:18:50.143 [2024-07-24 17:19:36.238915] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:18:50.143 [2024-07-24 17:19:36.239013] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid82594 ] 00:18:50.400 [2024-07-24 17:19:36.390345] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:50.400 [2024-07-24 17:19:36.551719] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:50.400 [2024-07-24 17:19:36.551841] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:50.400 [2024-07-24 17:19:36.551850] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:50.658 I/O targets: 00:18:50.658 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:50.658 00:18:50.658 00:18:50.658 CUnit - A unit testing framework for C - Version 2.1-3 00:18:50.658 http://cunit.sourceforge.net/ 00:18:50.658 00:18:50.658 00:18:50.658 Suite: bdevio tests on: Nvme1n1 00:18:50.658 Test: blockdev write read block ...passed 00:18:50.658 Test: blockdev write zeroes read block ...passed 00:18:50.658 Test: blockdev write zeroes read no split ...passed 00:18:50.658 Test: blockdev write zeroes read split ...passed 00:18:50.658 Test: blockdev write zeroes read split partial ...passed 00:18:50.658 Test: blockdev reset ...[2024-07-24 17:19:36.871647] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:50.658 [2024-07-24 17:19:36.871992] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1349460 (9): Bad file descriptor 00:18:50.658 [2024-07-24 17:19:36.883585] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:50.658 passed 00:18:50.658 Test: blockdev write read 8 blocks ...passed 00:18:50.658 Test: blockdev write read size > 128k ...passed 00:18:50.658 Test: blockdev write read invalid size ...passed 00:18:50.916 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:50.916 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:50.916 Test: blockdev write read max offset ...passed 00:18:50.916 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:50.916 Test: blockdev writev readv 8 blocks ...passed 00:18:50.916 Test: blockdev writev readv 30 x 1block ...passed 00:18:50.916 Test: blockdev writev readv block ...passed 00:18:50.916 Test: blockdev writev readv size > 128k ...passed 00:18:50.916 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:50.916 Test: blockdev comparev and writev ...[2024-07-24 17:19:37.058376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:50.916 [2024-07-24 17:19:37.058431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:50.916 [2024-07-24 17:19:37.058452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:50.917 [2024-07-24 17:19:37.058464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:50.917 [2024-07-24 17:19:37.058816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:50.917 [2024-07-24 17:19:37.058833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:50.917 [2024-07-24 17:19:37.058850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:50.917 [2024-07-24 17:19:37.058861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:50.917 [2024-07-24 17:19:37.059220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:50.917 [2024-07-24 17:19:37.059238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:50.917 [2024-07-24 17:19:37.059255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:50.917 [2024-07-24 17:19:37.059264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:50.917 [2024-07-24 17:19:37.059801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:50.917 [2024-07-24 17:19:37.059832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:50.917 [2024-07-24 17:19:37.059851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:50.917 [2024-07-24 17:19:37.059861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:50.917 passed 00:18:50.917 Test: blockdev nvme passthru rw ...passed 00:18:50.917 Test: blockdev nvme passthru vendor specific ...[2024-07-24 17:19:37.142886] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:50.917 [2024-07-24 17:19:37.142948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:50.917 [2024-07-24 17:19:37.143091] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:50.917 [2024-07-24 17:19:37.143138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:50.917 [2024-07-24 17:19:37.143275] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:50.917 [2024-07-24 17:19:37.143291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:50.917 [2024-07-24 17:19:37.143412] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:50.917 [2024-07-24 17:19:37.143436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:50.917 passed 00:18:51.175 Test: blockdev nvme admin passthru ...passed 00:18:51.175 Test: blockdev copy ...passed 00:18:51.175 00:18:51.175 Run Summary: Type Total Ran Passed Failed Inactive 00:18:51.175 suites 1 1 n/a 0 0 00:18:51.175 tests 23 23 23 0 0 00:18:51.175 asserts 152 152 152 0 n/a 00:18:51.175 00:18:51.175 Elapsed time = 0.919 seconds 00:18:51.432 17:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:51.433 17:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.433 17:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:51.433 17:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.433 17:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:51.433 17:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:18:51.433 17:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:51.433 17:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:18:51.691 17:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:51.691 17:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:18:51.691 17:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:51.691 17:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:51.691 rmmod nvme_tcp 00:18:51.691 rmmod nvme_fabrics 00:18:51.691 rmmod nvme_keyring 00:18:51.691 17:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:51.691 17:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@124 -- # set -e 00:18:51.691 17:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:18:51.691 17:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 82540 ']' 00:18:51.691 17:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 82540 00:18:51.691 17:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 82540 ']' 00:18:51.691 17:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 82540 00:18:51.691 17:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:18:51.691 17:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:51.691 17:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82540 00:18:51.691 killing process with pid 82540 00:18:51.691 17:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:18:51.691 17:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:18:51.691 17:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82540' 00:18:51.691 17:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 82540 00:18:51.691 17:19:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 82540 00:18:52.258 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:52.258 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:52.258 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:52.258 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:52.258 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:52.259 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:52.259 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:52.259 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:52.259 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:52.259 00:18:52.259 real 0m3.681s 00:18:52.259 user 0m13.397s 00:18:52.259 sys 0m1.426s 00:18:52.259 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:52.259 ************************************ 00:18:52.259 END TEST nvmf_bdevio_no_huge 00:18:52.259 ************************************ 00:18:52.259 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:52.259 17:19:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:52.259 17:19:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 
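The teardown that closed nvmf_bdevio_no_huge above is the mirror image of its setup: the subsystem is deleted over RPC, the kernel initiator modules are unloaded, the target process is killed, and the namespace plus initiator address are removed. A rough by-hand equivalent; the explicit `ip netns delete` step is an assumption about what _remove_spdk_ns does, since the log only shows the wrapper being invoked:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nvmfpid=82540                                   # pid recorded when the target started
  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  modprobe -r nvme-tcp                            # drops nvme_tcp, nvme_fabrics, nvme_keyring as in the rmmod lines above
  modprobe -r nvme-fabrics
  kill "$nvmfpid"
  ip netns delete nvmf_tgt_ns_spdk                # assumed body of _remove_spdk_ns
  ip -4 addr flush nvmf_init_if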
00:18:52.259 17:19:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:52.259 17:19:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:52.259 ************************************ 00:18:52.259 START TEST nvmf_tls 00:18:52.259 ************************************ 00:18:52.259 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:52.259 * Looking for test storage... 00:18:52.259 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:52.259 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:52.259 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:18:52.259 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:52.259 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:52.259 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:52.259 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:52.259 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:52.259 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:52.259 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:52.259 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:52.259 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:52.259 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:52.259 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:18:52.259 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:18:52.259 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:52.259 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:52.259 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:52.259 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:52.259 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:52.259 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:52.259 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:52.259 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:52.259 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.259 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.259 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.259 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:18:52.259 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.259 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:18:52.259 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:52.259 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:52.259 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:52.259 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:52.259 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:52.259 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:52.259 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
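As in the earlier tests, nvmf/common.sh derives the host identity for tls.sh from nvme-cli rather than hard-coding it: `nvme gen-hostnqn` returns a UUID-based NQN, the UUID portion becomes NVME_HOSTID, and both are passed on every host-side connect or attach. A small illustration of the pattern; the parameter expansion used for NVME_HOSTID here is illustrative rather than common.sh's exact code:

  NVME_HOSTNQN=$(nvme gen-hostnqn)     # e.g. nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b
  NVME_HOSTID=${NVME_HOSTNQN##*:}      # keep only the UUID portion of the generated NQN
  # the pair then rides along on every host-side command, e.g.
  #   nvme connect ... --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
  #   rpc.py bdev_nvme_attach_controller ... -q "$NVME_HOSTNQN"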
00:18:52.259 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:52.259 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:52.259 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:18:52.259 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:52.259 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:52.259 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:52.259 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:52.259 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:52.259 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:52.259 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:52.259 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:52.259 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:52.259 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:52.259 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:52.259 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:52.259 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:52.259 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:52.259 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:52.259 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:52.259 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:52.259 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:52.259 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:52.259 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:52.259 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:52.259 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:52.259 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:52.259 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:52.259 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:52.259 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:52.259 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:52.260 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:52.260 Cannot find device 
"nvmf_tgt_br" 00:18:52.260 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # true 00:18:52.260 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:52.260 Cannot find device "nvmf_tgt_br2" 00:18:52.260 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # true 00:18:52.260 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:52.260 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:52.260 Cannot find device "nvmf_tgt_br" 00:18:52.260 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # true 00:18:52.260 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:52.260 Cannot find device "nvmf_tgt_br2" 00:18:52.260 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # true 00:18:52.260 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:52.518 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:52.518 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:52.518 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:52.518 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # true 00:18:52.518 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:52.518 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:52.518 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:18:52.518 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:52.518 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:52.518 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:52.518 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:52.518 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:52.518 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:52.518 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:52.518 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:52.518 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:52.518 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:52.518 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:52.518 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:52.518 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 
00:18:52.518 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:52.518 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:52.518 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:52.518 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:52.518 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:52.518 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:52.518 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:52.518 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:52.518 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:52.518 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:52.518 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:52.518 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:52.518 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:18:52.518 00:18:52.518 --- 10.0.0.2 ping statistics --- 00:18:52.518 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:52.518 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:18:52.518 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:52.518 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:52.518 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.079 ms 00:18:52.518 00:18:52.518 --- 10.0.0.3 ping statistics --- 00:18:52.518 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:52.518 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:18:52.519 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:52.519 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:52.519 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:18:52.519 00:18:52.519 --- 10.0.0.1 ping statistics --- 00:18:52.519 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:52.519 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:18:52.519 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:52.519 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@433 -- # return 0 00:18:52.519 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:52.519 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:52.519 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:52.519 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:52.519 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:52.519 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:52.519 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:52.519 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:18:52.519 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:52.519 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:52.519 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:52.519 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=82778 00:18:52.519 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:18:52.519 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 82778 00:18:52.519 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 82778 ']' 00:18:52.519 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:52.519 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:52.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:52.519 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:52.519 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:52.519 17:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:52.777 [2024-07-24 17:19:38.790678] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
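Because the listener addresses now live inside nvmf_tgt_ns_spdk, nvmfappstart prepends the NVMF_TARGET_NS_CMD prefix so the target process itself runs in that namespace, and --wait-for-rpc keeps subsystem initialization on hold until the ssl socket options have been configured (framework_start_init is only issued later, at target/tls.sh@131). A minimal stand-in for the launch-and-wait step traced here; the polling loop is a simplification of the waitforlisten helper, not its actual code:

    # start the target inside the namespace; hold init until RPC configuration is done
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &
    nvmfpid=$!

    # rough equivalent of waitforlisten: wait for the RPC unix socket to appear
    until [[ -S /var/tmp/spdk.sock ]]; do
        sleep 0.2
    done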
00:18:52.777 [2024-07-24 17:19:38.790816] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:52.777 [2024-07-24 17:19:38.933203] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:53.036 [2024-07-24 17:19:39.073840] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:53.036 [2024-07-24 17:19:39.073890] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:53.036 [2024-07-24 17:19:39.073901] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:53.036 [2024-07-24 17:19:39.073908] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:53.036 [2024-07-24 17:19:39.073915] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:53.036 [2024-07-24 17:19:39.073940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:53.603 17:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:53.603 17:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:53.603 17:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:53.603 17:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:53.603 17:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:53.862 17:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:53.862 17:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:18:53.862 17:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:18:54.121 true 00:18:54.121 17:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:54.121 17:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:18:54.380 17:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # version=0 00:18:54.380 17:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:18:54.380 17:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:54.638 17:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:18:54.638 17:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:54.897 17:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # version=13 00:18:54.897 17:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:18:54.897 17:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:18:55.155 17:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i 
ssl 00:18:55.155 17:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:18:55.414 17:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # version=7 00:18:55.414 17:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:18:55.414 17:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:55.414 17:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:18:55.672 17:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:18:55.672 17:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:18:55.672 17:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:18:55.931 17:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:55.931 17:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:18:56.189 17:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:18:56.189 17:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:18:56.189 17:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:18:56.447 17:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:56.447 17:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:18:56.705 17:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:18:56.705 17:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:18:56.705 17:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:18:56.705 17:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:18:56.705 17:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:18:56.706 17:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:18:56.706 17:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:18:56.706 17:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:18:56.706 17:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:18:56.706 17:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:56.706 17:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:18:56.706 17:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:18:56.706 17:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:18:56.706 17:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:18:56.706 17:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:18:56.706 17:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:18:56.706 17:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:18:56.706 17:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:56.706 17:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:18:56.706 17:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.PuYbP2rDVP 00:18:56.706 17:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:18:56.706 17:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.ZSt3ofVAtr 00:18:56.706 17:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:56.706 17:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:56.706 17:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.PuYbP2rDVP 00:18:56.706 17:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.ZSt3ofVAtr 00:18:56.706 17:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@130 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:56.963 17:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:18:57.529 17:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.PuYbP2rDVP 00:18:57.529 17:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.PuYbP2rDVP 00:18:57.529 17:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:57.787 [2024-07-24 17:19:43.786911] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:57.787 17:19:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:58.045 17:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:58.045 [2024-07-24 17:19:44.267018] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:58.045 [2024-07-24 17:19:44.267247] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:58.303 17:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:58.303 malloc0 00:18:58.562 17:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:58.821 17:19:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.PuYbP2rDVP 00:18:59.080 [2024-07-24 17:19:45.062484] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: 
nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:59.080 17:19:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@137 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.PuYbP2rDVP 00:19:09.146 Initializing NVMe Controllers 00:19:09.146 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:09.146 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:09.146 Initialization complete. Launching workers. 00:19:09.146 ======================================================== 00:19:09.146 Latency(us) 00:19:09.146 Device Information : IOPS MiB/s Average min max 00:19:09.146 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9582.37 37.43 6680.44 1467.63 11351.17 00:19:09.146 ======================================================== 00:19:09.146 Total : 9582.37 37.43 6680.44 1467.63 11351.17 00:19:09.146 00:19:09.146 17:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.PuYbP2rDVP 00:19:09.146 17:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:09.146 17:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:09.146 17:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:09.146 17:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.PuYbP2rDVP' 00:19:09.146 17:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:09.146 17:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83143 00:19:09.146 17:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:09.146 17:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:09.146 17:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83143 /var/tmp/bdevperf.sock 00:19:09.146 17:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83143 ']' 00:19:09.146 17:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:09.146 17:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:09.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:09.146 17:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:09.146 17:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:09.146 17:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:09.146 [2024-07-24 17:19:55.341534] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
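The RPC sequence traced above is the whole TLS-side setup. Before framework_start_init, the ssl socket implementation is made the default and pinned to TLS 1.3 (the version-7 and ktls enable/disable toggles are exercised only to confirm those options round-trip). The NVMeTLSkey-1:01:...: strings produced by format_interchange_psk appear to follow the NVMe TLS PSK interchange format, base64 of the configured key bytes with a CRC32 appended, where the 01/02 field selects the hash (SHA-256 here); each key is written to a mktemp file and restricted to mode 0600. After init, the target gets a TCP transport, subsystem cnode1 backed by a malloc bdev, a listener on 10.0.0.2:4420 created with -k (TLS required), and host1 authorized with the first key. Condensed from the trace, with rpc pointing at scripts/rpc.py as in tls.sh@12:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # select the ssl socket implementation and require TLS 1.3 before init
    $rpc sock_set_default_impl -i ssl
    $rpc sock_impl_set_options -i ssl --tls-version 13
    $rpc sock_impl_get_options -i ssl | jq -r .tls_version   # expect 13
    $rpc framework_start_init

    # target side: TCP transport, one subsystem, TLS-only listener, PSK-authorized host
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
        --psk /tmp/tmp.PuYbP2rDVP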
00:19:09.146 [2024-07-24 17:19:55.341631] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83143 ] 00:19:09.405 [2024-07-24 17:19:55.479346] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:09.405 [2024-07-24 17:19:55.584319] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:10.340 17:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:10.340 17:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:10.340 17:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.PuYbP2rDVP 00:19:10.600 [2024-07-24 17:19:56.577079] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:10.600 [2024-07-24 17:19:56.577209] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:10.600 TLSTESTn1 00:19:10.600 17:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:10.600 Running I/O for 10 seconds... 00:19:20.658 00:19:20.658 Latency(us) 00:19:20.658 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:20.658 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:20.658 Verification LBA range: start 0x0 length 0x2000 00:19:20.658 TLSTESTn1 : 10.02 4044.57 15.80 0.00 0.00 31586.20 6911.07 35031.97 00:19:20.658 =================================================================================================================== 00:19:20.658 Total : 4044.57 15.80 0.00 0.00 31586.20 6911.07 35031.97 00:19:20.658 0 00:19:20.658 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:20.658 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 83143 00:19:20.658 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83143 ']' 00:19:20.658 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83143 00:19:20.658 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:20.658 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:20.658 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83143 00:19:20.658 killing process with pid 83143 00:19:20.658 Received shutdown signal, test time was about 10.000000 seconds 00:19:20.658 00:19:20.658 Latency(us) 00:19:20.658 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:20.658 =================================================================================================================== 00:19:20.658 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:20.658 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # 
process_name=reactor_2 00:19:20.658 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:20.658 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83143' 00:19:20.658 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83143 00:19:20.659 [2024-07-24 17:20:06.826426] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:20.659 17:20:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83143 00:19:20.918 17:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZSt3ofVAtr 00:19:20.918 17:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:20.918 17:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZSt3ofVAtr 00:19:20.918 17:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:20.918 17:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:20.918 17:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:20.918 17:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:20.918 17:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZSt3ofVAtr 00:19:20.918 17:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:20.918 17:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:20.918 17:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:20.918 17:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.ZSt3ofVAtr' 00:19:20.918 17:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:20.918 17:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83290 00:19:20.918 17:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:20.918 17:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83290 /var/tmp/bdevperf.sock 00:19:20.918 17:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:20.918 17:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83290 ']' 00:19:20.918 17:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:20.918 17:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:20.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
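The passing case above (and the spdk_nvme_perf run with --psk-path before it) follows the same three-step pattern every run_bdevperf invocation in this suite repeats: start bdevperf idle on its own RPC socket, attach a TLS controller with the client's copy of the PSK, then drive I/O through the bdevperf RPC helper. Condensed from the trace, paths as above; the waitforlisten step between launching bdevperf and issuing the attach is omitted here:

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # 1) idle bdevperf instance (-z) with its own RPC socket
    $bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &

    # 2) attach over TLS using the same PSK file the target registered for host1
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk /tmp/tmp.PuYbP2rDVP

    # 3) run the verify workload against the attached TLSTESTn1 bdev
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -t 20 -s /var/tmp/bdevperf.sock perform_tests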
00:19:20.918 17:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:20.918 17:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:20.918 17:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:20.918 [2024-07-24 17:20:07.114493] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:19:20.918 [2024-07-24 17:20:07.114624] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83290 ] 00:19:21.177 [2024-07-24 17:20:07.251366] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:21.177 [2024-07-24 17:20:07.369466] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:22.112 17:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:22.112 17:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:22.112 17:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ZSt3ofVAtr 00:19:22.371 [2024-07-24 17:20:08.363942] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:22.371 [2024-07-24 17:20:08.364070] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:22.371 [2024-07-24 17:20:08.370512] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:22.371 [2024-07-24 17:20:08.370680] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2340ca0 (107): Transport endpoint is not connected 00:19:22.371 [2024-07-24 17:20:08.371670] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2340ca0 (9): Bad file descriptor 00:19:22.371 [2024-07-24 17:20:08.372667] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:22.371 [2024-07-24 17:20:08.372707] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:22.371 [2024-07-24 17:20:08.372753] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:19:22.371 2024/07/24 17:20:08 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.ZSt3ofVAtr subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:19:22.371 request: 00:19:22.371 { 00:19:22.371 "method": "bdev_nvme_attach_controller", 00:19:22.371 "params": { 00:19:22.371 "name": "TLSTEST", 00:19:22.371 "trtype": "tcp", 00:19:22.371 "traddr": "10.0.0.2", 00:19:22.371 "adrfam": "ipv4", 00:19:22.371 "trsvcid": "4420", 00:19:22.371 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:22.371 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:22.371 "prchk_reftag": false, 00:19:22.371 "prchk_guard": false, 00:19:22.371 "hdgst": false, 00:19:22.371 "ddgst": false, 00:19:22.371 "psk": "/tmp/tmp.ZSt3ofVAtr" 00:19:22.371 } 00:19:22.371 } 00:19:22.371 Got JSON-RPC error response 00:19:22.371 GoRPCClient: error on JSON-RPC call 00:19:22.371 17:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 83290 00:19:22.371 17:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83290 ']' 00:19:22.371 17:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83290 00:19:22.371 17:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:22.371 17:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:22.371 17:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83290 00:19:22.371 killing process with pid 83290 00:19:22.371 Received shutdown signal, test time was about 10.000000 seconds 00:19:22.371 00:19:22.371 Latency(us) 00:19:22.371 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:22.371 =================================================================================================================== 00:19:22.371 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:22.371 17:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:22.371 17:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:22.371 17:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83290' 00:19:22.371 17:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83290 00:19:22.371 [2024-07-24 17:20:08.423875] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:22.371 17:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83290 00:19:22.630 17:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:22.630 17:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:22.630 17:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:22.630 17:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:22.630 17:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 
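That was the first negative case: NOT run_bdevperf hands bdevperf the same subsystem and host NQNs but points --psk at the second key, /tmp/tmp.ZSt3ofVAtr, which the target never registered, so the attach fails with the Code=-5 Input/output error dumped above, and the surrounding NOT helper passes precisely because the exit status is non-zero. Roughly, the check reduces to this shape (a simplification of the NOT/run_bdevperf wrappers, not their actual code, with $rpc as in the sketches above):

    # attach with a key the target does not know; success here would be a test failure
    if $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
          -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
          -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
          --psk /tmp/tmp.ZSt3ofVAtr; then
        echo "attach with a mismatched PSK unexpectedly succeeded" >&2
        exit 1
    fi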
00:19:22.630 17:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.PuYbP2rDVP 00:19:22.630 17:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:22.630 17:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.PuYbP2rDVP 00:19:22.630 17:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:22.630 17:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:22.630 17:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:22.630 17:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:22.630 17:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.PuYbP2rDVP 00:19:22.631 17:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:22.631 17:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:22.631 17:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:19:22.631 17:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.PuYbP2rDVP' 00:19:22.631 17:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:22.631 17:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83340 00:19:22.631 17:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:22.631 17:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:22.631 17:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83340 /var/tmp/bdevperf.sock 00:19:22.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:22.631 17:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83340 ']' 00:19:22.631 17:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:22.631 17:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:22.631 17:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:22.631 17:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:22.631 17:20:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:22.631 [2024-07-24 17:20:08.712063] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
00:19:22.631 [2024-07-24 17:20:08.712185] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83340 ] 00:19:22.631 [2024-07-24 17:20:08.844822] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:22.890 [2024-07-24 17:20:08.960069] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:23.825 17:20:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:23.825 17:20:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:23.825 17:20:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.PuYbP2rDVP 00:19:23.825 [2024-07-24 17:20:10.002424] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:23.825 [2024-07-24 17:20:10.002536] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:23.825 [2024-07-24 17:20:10.010552] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:23.825 [2024-07-24 17:20:10.010606] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:23.825 [2024-07-24 17:20:10.010671] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:23.825 [2024-07-24 17:20:10.011161] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x180aca0 (107): Transport endpoint is not connected 00:19:23.825 [2024-07-24 17:20:10.012147] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x180aca0 (9): Bad file descriptor 00:19:23.825 [2024-07-24 17:20:10.013142] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:23.825 [2024-07-24 17:20:10.013169] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:23.825 [2024-07-24 17:20:10.013183] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:19:23.825 2024/07/24 17:20:10 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.PuYbP2rDVP subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:19:23.825 request: 00:19:23.825 { 00:19:23.825 "method": "bdev_nvme_attach_controller", 00:19:23.825 "params": { 00:19:23.825 "name": "TLSTEST", 00:19:23.825 "trtype": "tcp", 00:19:23.825 "traddr": "10.0.0.2", 00:19:23.825 "adrfam": "ipv4", 00:19:23.825 "trsvcid": "4420", 00:19:23.825 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:23.825 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:23.825 "prchk_reftag": false, 00:19:23.825 "prchk_guard": false, 00:19:23.825 "hdgst": false, 00:19:23.825 "ddgst": false, 00:19:23.825 "psk": "/tmp/tmp.PuYbP2rDVP" 00:19:23.825 } 00:19:23.825 } 00:19:23.825 Got JSON-RPC error response 00:19:23.825 GoRPCClient: error on JSON-RPC call 00:19:23.825 17:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 83340 00:19:23.825 17:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83340 ']' 00:19:23.825 17:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83340 00:19:23.825 17:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:23.825 17:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:23.825 17:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83340 00:19:23.825 killing process with pid 83340 00:19:23.825 Received shutdown signal, test time was about 10.000000 seconds 00:19:23.825 00:19:23.825 Latency(us) 00:19:23.825 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:23.825 =================================================================================================================== 00:19:23.825 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:23.825 17:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:23.825 17:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:23.825 17:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83340' 00:19:23.825 17:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83340 00:19:23.825 [2024-07-24 17:20:10.055108] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:23.825 17:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83340 00:19:24.084 17:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:24.084 17:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:24.084 17:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:24.084 17:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:24.084 17:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 
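The second negative case keeps the correct key but connects as host2. The lookup failure above shows how the target resolves PSKs: it builds the TLS identity string "NVMe0R01 <hostnqn> <subsystem nqn>" and searches the keys registered for that exact pair. Only host1 was bound to cnode1 back at target/tls.sh@58, so the identity for host2 has nothing to match and the handshake never completes. The binding being tested is simply the earlier authorization step:

    # only host1 is authorized for cnode1; any other hostnqn yields
    # "Could not find PSK for identity: NVMe0R01 <hostnqn> <subnqn>"
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.PuYbP2rDVP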
00:19:24.084 17:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.PuYbP2rDVP 00:19:24.084 17:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:24.084 17:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.PuYbP2rDVP 00:19:24.084 17:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:24.084 17:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:24.084 17:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:24.084 17:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:24.084 17:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.PuYbP2rDVP 00:19:24.084 17:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:24.084 17:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:19:24.084 17:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:24.084 17:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.PuYbP2rDVP' 00:19:24.084 17:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:24.084 17:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83381 00:19:24.084 17:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:24.084 17:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:24.084 17:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83381 /var/tmp/bdevperf.sock 00:19:24.084 17:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83381 ']' 00:19:24.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:24.084 17:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:24.084 17:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:24.084 17:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:24.084 17:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:24.084 17:20:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:24.342 [2024-07-24 17:20:10.331330] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
00:19:24.342 [2024-07-24 17:20:10.331447] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83381 ] 00:19:24.343 [2024-07-24 17:20:10.467291] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:24.601 [2024-07-24 17:20:10.580510] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:25.168 17:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:25.168 17:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:25.168 17:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.PuYbP2rDVP 00:19:25.428 [2024-07-24 17:20:11.541031] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:25.428 [2024-07-24 17:20:11.541202] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:25.428 [2024-07-24 17:20:11.550764] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:25.428 [2024-07-24 17:20:11.550861] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:25.428 [2024-07-24 17:20:11.550913] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:25.428 [2024-07-24 17:20:11.551901] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe90ca0 (107): Transport endpoint is not connected 00:19:25.428 [2024-07-24 17:20:11.552891] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe90ca0 (9): Bad file descriptor 00:19:25.428 [2024-07-24 17:20:11.553887] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:19:25.428 [2024-07-24 17:20:11.553924] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:25.428 [2024-07-24 17:20:11.553952] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:19:25.428 2024/07/24 17:20:11 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.PuYbP2rDVP subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:19:25.428 request: 00:19:25.428 { 00:19:25.428 "method": "bdev_nvme_attach_controller", 00:19:25.428 "params": { 00:19:25.428 "name": "TLSTEST", 00:19:25.428 "trtype": "tcp", 00:19:25.428 "traddr": "10.0.0.2", 00:19:25.428 "adrfam": "ipv4", 00:19:25.428 "trsvcid": "4420", 00:19:25.428 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:25.428 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:25.428 "prchk_reftag": false, 00:19:25.428 "prchk_guard": false, 00:19:25.428 "hdgst": false, 00:19:25.428 "ddgst": false, 00:19:25.428 "psk": "/tmp/tmp.PuYbP2rDVP" 00:19:25.428 } 00:19:25.428 } 00:19:25.428 Got JSON-RPC error response 00:19:25.428 GoRPCClient: error on JSON-RPC call 00:19:25.428 17:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 83381 00:19:25.428 17:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83381 ']' 00:19:25.428 17:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83381 00:19:25.428 17:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:25.428 17:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:25.428 17:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83381 00:19:25.428 killing process with pid 83381 00:19:25.428 Received shutdown signal, test time was about 10.000000 seconds 00:19:25.428 00:19:25.428 Latency(us) 00:19:25.428 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:25.428 =================================================================================================================== 00:19:25.428 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:25.428 17:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:25.428 17:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:25.428 17:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83381' 00:19:25.428 17:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83381 00:19:25.428 [2024-07-24 17:20:11.606081] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:25.428 17:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83381 00:19:25.687 17:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:25.687 17:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:25.687 17:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:25.687 17:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:25.687 17:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 
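The third case flips it again: right host, right key, but a subsystem NQN (cnode2) that was never created on the target, so the identity lookup fails the same way and the RPC exits non-zero, which is all the NOT wrapper checks. When debugging this kind of failure by hand, the target-side state can be inspected with the usual get RPCs; a hypothetical spot-check, not part of tls.sh:

    # confirm what the target actually exposes: cnode1 only, host1 allowed, TLS 1.3
    $rpc nvmf_get_subsystems
    $rpc sock_impl_get_options -i ssl | jq -r .tls_version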
00:19:25.687 17:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:25.687 17:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:25.687 17:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:25.687 17:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:25.687 17:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:25.687 17:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:25.687 17:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:25.687 17:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:25.687 17:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:25.687 17:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:25.687 17:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:25.687 17:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:19:25.687 17:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:25.687 17:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83427 00:19:25.687 17:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:25.687 17:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:25.687 17:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83427 /var/tmp/bdevperf.sock 00:19:25.687 17:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83427 ']' 00:19:25.687 17:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:25.687 17:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:25.687 17:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:25.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:25.687 17:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:25.687 17:20:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:25.687 [2024-07-24 17:20:11.895560] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
00:19:25.687 [2024-07-24 17:20:11.895674] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83427 ] 00:19:25.946 [2024-07-24 17:20:12.034919] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:25.946 [2024-07-24 17:20:12.147945] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:26.880 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:26.880 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:26.880 17:20:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:26.880 [2024-07-24 17:20:13.051206] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:26.880 [2024-07-24 17:20:13.052981] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20b2240 (9): Bad file descriptor 00:19:26.880 [2024-07-24 17:20:13.053977] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:26.880 [2024-07-24 17:20:13.054014] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:26.880 [2024-07-24 17:20:13.054027] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:19:26.880 2024/07/24 17:20:13 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:19:26.880 request: 00:19:26.880 { 00:19:26.880 "method": "bdev_nvme_attach_controller", 00:19:26.880 "params": { 00:19:26.880 "name": "TLSTEST", 00:19:26.880 "trtype": "tcp", 00:19:26.880 "traddr": "10.0.0.2", 00:19:26.880 "adrfam": "ipv4", 00:19:26.880 "trsvcid": "4420", 00:19:26.880 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:26.880 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:26.880 "prchk_reftag": false, 00:19:26.880 "prchk_guard": false, 00:19:26.880 "hdgst": false, 00:19:26.880 "ddgst": false 00:19:26.880 } 00:19:26.880 } 00:19:26.880 Got JSON-RPC error response 00:19:26.880 GoRPCClient: error on JSON-RPC call 00:19:26.880 17:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 83427 00:19:26.880 17:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83427 ']' 00:19:26.880 17:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83427 00:19:26.880 17:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:26.880 17:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:26.880 17:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83427 00:19:26.880 killing process with pid 83427 00:19:26.880 Received shutdown signal, test time was about 10.000000 seconds 00:19:26.880 00:19:26.880 Latency(us) 00:19:26.880 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:26.880 =================================================================================================================== 00:19:26.880 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:26.880 17:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:26.880 17:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:26.880 17:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83427' 00:19:26.880 17:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83427 00:19:26.880 17:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83427 00:19:27.139 17:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:27.139 17:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:27.139 17:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:27.139 17:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:27.139 17:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:27.139 17:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@158 -- # killprocess 82778 00:19:27.139 17:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 82778 ']' 00:19:27.139 17:20:13 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 82778 00:19:27.139 17:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:27.139 17:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:27.139 17:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82778 00:19:27.139 killing process with pid 82778 00:19:27.139 17:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:27.139 17:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:27.139 17:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82778' 00:19:27.139 17:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 82778 00:19:27.139 [2024-07-24 17:20:13.352378] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:27.139 17:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 82778 00:19:27.396 17:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:19:27.396 17:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:19:27.396 17:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:19:27.396 17:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:19:27.396 17:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:19:27.396 17:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:19:27.396 17:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:19:27.654 17:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:27.654 17:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:19:27.654 17:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.AvJ35qsIO2 00:19:27.654 17:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:27.654 17:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.AvJ35qsIO2 00:19:27.654 17:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:19:27.654 17:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:27.654 17:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:27.654 17:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:27.654 17:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=83483 00:19:27.654 17:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 83483 00:19:27.654 17:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk 
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:27.654 17:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83483 ']' 00:19:27.654 17:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:27.654 17:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:27.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:27.654 17:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:27.654 17:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:27.654 17:20:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:27.654 [2024-07-24 17:20:13.715032] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:19:27.654 [2024-07-24 17:20:13.715157] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:27.654 [2024-07-24 17:20:13.856342] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:27.913 [2024-07-24 17:20:13.965731] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:27.913 [2024-07-24 17:20:13.965796] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:27.913 [2024-07-24 17:20:13.965806] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:27.913 [2024-07-24 17:20:13.965814] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:27.913 [2024-07-24 17:20:13.965821] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
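The key string produced a few lines above and saved as /tmp/tmp.AvJ35qsIO2 (NVMeTLSkey-1:02:MDAx...Jw==:) is in the NVMe TLS PSK interchange format: a fixed prefix, a two-digit hash identifier (format_interchange_psk was called with 2), and a colon-terminated base64 blob. The inline python heredoc that builds it is not shown in the trace; the sketch below is an assumption about what it computes, namely the configured key characters with a CRC32 trailer, base64-encoded, and it reproduces the logged value only if that assumption (including the little-endian CRC) holds:

    import base64
    import zlib

    def format_interchange_psk(key: str, hash_id: int) -> str:
        # Illustrative reconstruction, not SPDK's helper: append CRC32 (assumed
        # little-endian) of the key characters, base64-encode, and wrap the result
        # in the colon-delimited interchange frame.
        crc = zlib.crc32(key.encode()).to_bytes(4, "little")
        blob = base64.b64encode(key.encode() + crc).decode()
        return "NVMeTLSkey-1:{:02x}:{}:".format(hash_id, blob)

    print(format_interchange_psk(
        "00112233445566778899aabbccddeeff0011223344556677", 2))
    # Logged value, which the call above should match if the assumptions hold:
    # NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: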
00:19:27.913 [2024-07-24 17:20:13.965845] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:28.891 17:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:28.891 17:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:28.891 17:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:28.891 17:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:28.891 17:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:28.891 17:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:28.891 17:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.AvJ35qsIO2 00:19:28.891 17:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.AvJ35qsIO2 00:19:28.891 17:20:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:28.891 [2024-07-24 17:20:15.043065] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:28.891 17:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:29.150 17:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:29.409 [2024-07-24 17:20:15.551179] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:29.409 [2024-07-24 17:20:15.551399] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:29.409 17:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:29.667 malloc0 00:19:29.667 17:20:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:29.925 17:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.AvJ35qsIO2 00:19:30.184 [2024-07-24 17:20:16.299313] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:30.184 17:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.AvJ35qsIO2 00:19:30.184 17:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:30.184 17:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:30.184 17:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:30.184 17:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.AvJ35qsIO2' 00:19:30.184 17:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:30.184 17:20:16 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:30.184 17:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83585 00:19:30.184 17:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:30.184 17:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83585 /var/tmp/bdevperf.sock 00:19:30.184 17:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83585 ']' 00:19:30.184 17:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:30.184 17:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:30.184 17:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:30.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:30.184 17:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:30.184 17:20:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:30.184 [2024-07-24 17:20:16.363215] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:19:30.184 [2024-07-24 17:20:16.363327] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83585 ] 00:19:30.443 [2024-07-24 17:20:16.501889] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:30.443 [2024-07-24 17:20:16.626213] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:31.378 17:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:31.378 17:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:31.378 17:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.AvJ35qsIO2 00:19:31.378 [2024-07-24 17:20:17.528107] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:31.378 [2024-07-24 17:20:17.528246] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:31.378 TLSTESTn1 00:19:31.637 17:20:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:31.637 Running I/O for 10 seconds... 
00:19:41.613 00:19:41.613 Latency(us) 00:19:41.613 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:41.613 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:41.613 Verification LBA range: start 0x0 length 0x2000 00:19:41.613 TLSTESTn1 : 10.03 4109.56 16.05 0.00 0.00 31083.78 10366.60 23712.12 00:19:41.613 =================================================================================================================== 00:19:41.613 Total : 4109.56 16.05 0.00 0.00 31083.78 10366.60 23712.12 00:19:41.613 0 00:19:41.613 17:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:41.613 17:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 83585 00:19:41.613 17:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83585 ']' 00:19:41.613 17:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83585 00:19:41.613 17:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:41.613 17:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:41.613 17:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83585 00:19:41.613 killing process with pid 83585 00:19:41.613 Received shutdown signal, test time was about 10.000000 seconds 00:19:41.613 00:19:41.613 Latency(us) 00:19:41.613 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:41.613 =================================================================================================================== 00:19:41.613 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:41.613 17:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:41.613 17:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:41.613 17:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83585' 00:19:41.613 17:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83585 00:19:41.614 [2024-07-24 17:20:27.814946] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:41.614 17:20:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83585 00:19:41.872 17:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.AvJ35qsIO2 00:19:41.872 17:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.AvJ35qsIO2 00:19:41.872 17:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:41.872 17:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.AvJ35qsIO2 00:19:41.872 17:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:41.872 17:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:41.872 17:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:41.872 Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:41.872 17:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:41.872 17:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.AvJ35qsIO2 00:19:41.872 17:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:41.872 17:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:41.872 17:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:41.872 17:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.AvJ35qsIO2' 00:19:41.872 17:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:41.872 17:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83738 00:19:41.872 17:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:41.872 17:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:41.872 17:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83738 /var/tmp/bdevperf.sock 00:19:41.872 17:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83738 ']' 00:19:41.872 17:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:41.872 17:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:41.873 17:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:41.873 17:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:41.873 17:20:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:41.873 [2024-07-24 17:20:28.094081] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
00:19:41.873 [2024-07-24 17:20:28.094186] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83738 ] 00:19:42.131 [2024-07-24 17:20:28.225494] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:42.131 [2024-07-24 17:20:28.338300] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:43.067 17:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:43.067 17:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:43.067 17:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.AvJ35qsIO2 00:19:43.067 [2024-07-24 17:20:29.253941] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:43.067 [2024-07-24 17:20:29.254029] bdev_nvme.c:6153:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:19:43.067 [2024-07-24 17:20:29.254040] bdev_nvme.c:6258:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.AvJ35qsIO2 00:19:43.067 2024/07/24 17:20:29 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:/tmp/tmp.AvJ35qsIO2 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-1 Msg=Operation not permitted 00:19:43.067 request: 00:19:43.067 { 00:19:43.067 "method": "bdev_nvme_attach_controller", 00:19:43.067 "params": { 00:19:43.067 "name": "TLSTEST", 00:19:43.067 "trtype": "tcp", 00:19:43.067 "traddr": "10.0.0.2", 00:19:43.067 "adrfam": "ipv4", 00:19:43.067 "trsvcid": "4420", 00:19:43.067 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:43.067 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:43.067 "prchk_reftag": false, 00:19:43.067 "prchk_guard": false, 00:19:43.067 "hdgst": false, 00:19:43.067 "ddgst": false, 00:19:43.067 "psk": "/tmp/tmp.AvJ35qsIO2" 00:19:43.067 } 00:19:43.067 } 00:19:43.067 Got JSON-RPC error response 00:19:43.067 GoRPCClient: error on JSON-RPC call 00:19:43.067 17:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 83738 00:19:43.067 17:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83738 ']' 00:19:43.067 17:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83738 00:19:43.067 17:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:43.067 17:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:43.067 17:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83738 00:19:43.326 killing process with pid 83738 00:19:43.326 Received shutdown signal, test time was about 10.000000 seconds 00:19:43.326 00:19:43.326 Latency(us) 00:19:43.326 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:43.326 
=================================================================================================================== 00:19:43.326 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:43.326 17:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:43.326 17:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:43.326 17:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83738' 00:19:43.326 17:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83738 00:19:43.326 17:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83738 00:19:43.326 17:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:43.326 17:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:43.326 17:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:43.326 17:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:43.326 17:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:43.326 17:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@174 -- # killprocess 83483 00:19:43.326 17:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83483 ']' 00:19:43.326 17:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83483 00:19:43.326 17:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:43.326 17:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:43.327 17:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83483 00:19:43.327 killing process with pid 83483 00:19:43.327 17:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:43.327 17:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:43.327 17:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83483' 00:19:43.327 17:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83483 00:19:43.327 [2024-07-24 17:20:29.548449] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:43.327 17:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83483 00:19:43.586 17:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:19:43.586 17:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:43.586 17:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:43.586 17:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:43.586 17:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=83788 00:19:43.586 17:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:43.586 17:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@482 -- # waitforlisten 83788 00:19:43.586 17:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83788 ']' 00:19:43.586 17:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:43.586 17:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:43.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:43.586 17:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:43.586 17:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:43.586 17:20:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:43.845 [2024-07-24 17:20:29.860212] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:19:43.845 [2024-07-24 17:20:29.860341] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:43.845 [2024-07-24 17:20:30.000613] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:44.104 [2024-07-24 17:20:30.109436] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:44.104 [2024-07-24 17:20:30.109524] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:44.104 [2024-07-24 17:20:30.109551] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:44.104 [2024-07-24 17:20:30.109560] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:44.104 [2024-07-24 17:20:30.109581] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
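Both rejections in this part of the test, the bdev_nvme_attach_controller failure above ("Incorrect permissions for PSK file", "Could not load PSK") and the nvmf_subsystem_add_host failure that follows, are triggered by the earlier chmod 0666 on /tmp/tmp.AvJ35qsIO2: a key file readable by group or others is refused, and the test only proceeds again after the later chmod 0600. The snippet below illustrates that kind of gate; the exact policy is an assumption for illustration, not SPDK's code, and the scratch path is hypothetical:

    import os
    import stat

    def psk_file_is_private(path: str) -> bool:
        # Illustrative policy: refuse any key file with group/other permission bits set.
        mode = os.stat(path).st_mode
        return (mode & (stat.S_IRWXG | stat.S_IRWXO)) == 0

    path = "/tmp/psk_demo"                       # hypothetical scratch file, not the test's key
    with open(path, "w") as f:
        f.write("NVMeTLSkey-1:02:placeholder:")  # placeholder contents
    for mode in (0o666, 0o600):
        os.chmod(path, mode)
        verdict = "ok" if psk_file_is_private(path) else "incorrect permissions"
        print(oct(mode), "->", verdict)
    os.remove(path)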
00:19:44.104 [2024-07-24 17:20:30.109626] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:44.670 17:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:44.670 17:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:44.670 17:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:44.670 17:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:44.670 17:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:44.670 17:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:44.670 17:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.AvJ35qsIO2 00:19:44.670 17:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:44.670 17:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.AvJ35qsIO2 00:19:44.670 17:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:19:44.670 17:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:44.670 17:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:19:44.670 17:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:44.670 17:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.AvJ35qsIO2 00:19:44.670 17:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.AvJ35qsIO2 00:19:44.670 17:20:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:45.236 [2024-07-24 17:20:31.172096] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:45.236 17:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:45.236 17:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:45.494 [2024-07-24 17:20:31.656256] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:45.494 [2024-07-24 17:20:31.656468] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:45.494 17:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:45.752 malloc0 00:19:45.752 17:20:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:46.318 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.AvJ35qsIO2 00:19:46.318 [2024-07-24 17:20:32.520111] tcp.c:3635:tcp_load_psk: *ERROR*: Incorrect 
permissions for PSK file 00:19:46.318 [2024-07-24 17:20:32.520178] tcp.c:3721:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:19:46.318 [2024-07-24 17:20:32.520211] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:19:46.318 2024/07/24 17:20:32 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:/tmp/tmp.AvJ35qsIO2], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error 00:19:46.318 request: 00:19:46.318 { 00:19:46.318 "method": "nvmf_subsystem_add_host", 00:19:46.318 "params": { 00:19:46.318 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:46.318 "host": "nqn.2016-06.io.spdk:host1", 00:19:46.318 "psk": "/tmp/tmp.AvJ35qsIO2" 00:19:46.318 } 00:19:46.318 } 00:19:46.318 Got JSON-RPC error response 00:19:46.318 GoRPCClient: error on JSON-RPC call 00:19:46.318 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:46.318 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:46.318 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:46.318 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:46.318 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@180 -- # killprocess 83788 00:19:46.318 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83788 ']' 00:19:46.318 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83788 00:19:46.318 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:46.318 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:46.576 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83788 00:19:46.576 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:46.576 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:46.576 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83788' 00:19:46.576 killing process with pid 83788 00:19:46.576 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83788 00:19:46.576 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83788 00:19:46.835 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.AvJ35qsIO2 00:19:46.835 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:19:46.835 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:46.835 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:46.835 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:46.835 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=83899 00:19:46.835 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 83899 00:19:46.835 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk 
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:46.835 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83899 ']' 00:19:46.835 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:46.835 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:46.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:46.835 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:46.835 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:46.835 17:20:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:46.835 [2024-07-24 17:20:32.876074] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:19:46.835 [2024-07-24 17:20:32.876191] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:46.835 [2024-07-24 17:20:33.010936] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:47.092 [2024-07-24 17:20:33.123834] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:47.092 [2024-07-24 17:20:33.123897] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:47.092 [2024-07-24 17:20:33.123909] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:47.092 [2024-07-24 17:20:33.123918] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:47.092 [2024-07-24 17:20:33.123926] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:47.092 [2024-07-24 17:20:33.123960] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:47.657 17:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:47.657 17:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:47.657 17:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:47.657 17:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:47.657 17:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:47.657 17:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:47.657 17:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.AvJ35qsIO2 00:19:47.657 17:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.AvJ35qsIO2 00:19:47.657 17:20:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:47.914 [2024-07-24 17:20:34.093789] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:47.914 17:20:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:48.172 17:20:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:48.429 [2024-07-24 17:20:34.573909] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:48.429 [2024-07-24 17:20:34.574136] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:48.429 17:20:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:48.687 malloc0 00:19:48.687 17:20:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:48.945 17:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.AvJ35qsIO2 00:19:49.203 [2024-07-24 17:20:35.345346] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:49.203 17:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=84002 00:19:49.203 17:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@187 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:49.203 17:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:49.203 17:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 84002 /var/tmp/bdevperf.sock 00:19:49.203 17:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84002 ']' 00:19:49.203 17:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:19:49.203 17:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:49.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:49.203 17:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:49.203 17:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:49.203 17:20:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:49.203 [2024-07-24 17:20:35.416174] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:19:49.203 [2024-07-24 17:20:35.416290] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84002 ] 00:19:49.460 [2024-07-24 17:20:35.549380] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:49.460 [2024-07-24 17:20:35.663396] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:50.394 17:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:50.394 17:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:50.394 17:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.AvJ35qsIO2 00:19:50.652 [2024-07-24 17:20:36.712822] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:50.652 [2024-07-24 17:20:36.712934] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:50.652 TLSTESTn1 00:19:50.652 17:20:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:19:50.911 17:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:19:50.911 "subsystems": [ 00:19:50.911 { 00:19:50.911 "subsystem": "keyring", 00:19:50.911 "config": [] 00:19:50.911 }, 00:19:50.911 { 00:19:50.911 "subsystem": "iobuf", 00:19:50.911 "config": [ 00:19:50.911 { 00:19:50.911 "method": "iobuf_set_options", 00:19:50.911 "params": { 00:19:50.911 "large_bufsize": 135168, 00:19:50.911 "large_pool_count": 1024, 00:19:50.911 "small_bufsize": 8192, 00:19:50.911 "small_pool_count": 8192 00:19:50.911 } 00:19:50.911 } 00:19:50.911 ] 00:19:50.911 }, 00:19:50.911 { 00:19:50.911 "subsystem": "sock", 00:19:50.911 "config": [ 00:19:50.911 { 00:19:50.911 "method": "sock_set_default_impl", 00:19:50.911 "params": { 00:19:50.911 "impl_name": "posix" 00:19:50.911 } 00:19:50.911 }, 00:19:50.911 { 00:19:50.911 "method": "sock_impl_set_options", 00:19:50.911 "params": { 00:19:50.911 "enable_ktls": false, 00:19:50.911 "enable_placement_id": 0, 00:19:50.911 "enable_quickack": false, 00:19:50.911 "enable_recv_pipe": true, 00:19:50.911 "enable_zerocopy_send_client": false, 00:19:50.911 "enable_zerocopy_send_server": true, 00:19:50.911 "impl_name": "ssl", 00:19:50.911 "recv_buf_size": 4096, 
00:19:50.911 "send_buf_size": 4096, 00:19:50.911 "tls_version": 0, 00:19:50.911 "zerocopy_threshold": 0 00:19:50.911 } 00:19:50.911 }, 00:19:50.911 { 00:19:50.911 "method": "sock_impl_set_options", 00:19:50.911 "params": { 00:19:50.911 "enable_ktls": false, 00:19:50.911 "enable_placement_id": 0, 00:19:50.911 "enable_quickack": false, 00:19:50.911 "enable_recv_pipe": true, 00:19:50.911 "enable_zerocopy_send_client": false, 00:19:50.911 "enable_zerocopy_send_server": true, 00:19:50.911 "impl_name": "posix", 00:19:50.911 "recv_buf_size": 2097152, 00:19:50.911 "send_buf_size": 2097152, 00:19:50.911 "tls_version": 0, 00:19:50.911 "zerocopy_threshold": 0 00:19:50.911 } 00:19:50.911 } 00:19:50.911 ] 00:19:50.911 }, 00:19:50.911 { 00:19:50.911 "subsystem": "vmd", 00:19:50.911 "config": [] 00:19:50.911 }, 00:19:50.911 { 00:19:50.911 "subsystem": "accel", 00:19:50.911 "config": [ 00:19:50.911 { 00:19:50.911 "method": "accel_set_options", 00:19:50.911 "params": { 00:19:50.911 "buf_count": 2048, 00:19:50.911 "large_cache_size": 16, 00:19:50.911 "sequence_count": 2048, 00:19:50.911 "small_cache_size": 128, 00:19:50.911 "task_count": 2048 00:19:50.911 } 00:19:50.911 } 00:19:50.911 ] 00:19:50.911 }, 00:19:50.911 { 00:19:50.911 "subsystem": "bdev", 00:19:50.911 "config": [ 00:19:50.911 { 00:19:50.911 "method": "bdev_set_options", 00:19:50.911 "params": { 00:19:50.911 "bdev_auto_examine": true, 00:19:50.911 "bdev_io_cache_size": 256, 00:19:50.911 "bdev_io_pool_size": 65535, 00:19:50.911 "iobuf_large_cache_size": 16, 00:19:50.911 "iobuf_small_cache_size": 128 00:19:50.911 } 00:19:50.911 }, 00:19:50.911 { 00:19:50.911 "method": "bdev_raid_set_options", 00:19:50.911 "params": { 00:19:50.911 "process_max_bandwidth_mb_sec": 0, 00:19:50.911 "process_window_size_kb": 1024 00:19:50.911 } 00:19:50.911 }, 00:19:50.911 { 00:19:50.911 "method": "bdev_iscsi_set_options", 00:19:50.911 "params": { 00:19:50.911 "timeout_sec": 30 00:19:50.911 } 00:19:50.911 }, 00:19:50.911 { 00:19:50.911 "method": "bdev_nvme_set_options", 00:19:50.911 "params": { 00:19:50.911 "action_on_timeout": "none", 00:19:50.911 "allow_accel_sequence": false, 00:19:50.911 "arbitration_burst": 0, 00:19:50.911 "bdev_retry_count": 3, 00:19:50.911 "ctrlr_loss_timeout_sec": 0, 00:19:50.911 "delay_cmd_submit": true, 00:19:50.911 "dhchap_dhgroups": [ 00:19:50.911 "null", 00:19:50.911 "ffdhe2048", 00:19:50.911 "ffdhe3072", 00:19:50.911 "ffdhe4096", 00:19:50.911 "ffdhe6144", 00:19:50.911 "ffdhe8192" 00:19:50.911 ], 00:19:50.911 "dhchap_digests": [ 00:19:50.911 "sha256", 00:19:50.911 "sha384", 00:19:50.911 "sha512" 00:19:50.911 ], 00:19:50.911 "disable_auto_failback": false, 00:19:50.911 "fast_io_fail_timeout_sec": 0, 00:19:50.911 "generate_uuids": false, 00:19:50.911 "high_priority_weight": 0, 00:19:50.911 "io_path_stat": false, 00:19:50.911 "io_queue_requests": 0, 00:19:50.912 "keep_alive_timeout_ms": 10000, 00:19:50.912 "low_priority_weight": 0, 00:19:50.912 "medium_priority_weight": 0, 00:19:50.912 "nvme_adminq_poll_period_us": 10000, 00:19:50.912 "nvme_error_stat": false, 00:19:50.912 "nvme_ioq_poll_period_us": 0, 00:19:50.912 "rdma_cm_event_timeout_ms": 0, 00:19:50.912 "rdma_max_cq_size": 0, 00:19:50.912 "rdma_srq_size": 0, 00:19:50.912 "reconnect_delay_sec": 0, 00:19:50.912 "timeout_admin_us": 0, 00:19:50.912 "timeout_us": 0, 00:19:50.912 "transport_ack_timeout": 0, 00:19:50.912 "transport_retry_count": 4, 00:19:50.912 "transport_tos": 0 00:19:50.912 } 00:19:50.912 }, 00:19:50.912 { 00:19:50.912 "method": "bdev_nvme_set_hotplug", 00:19:50.912 "params": { 
00:19:50.912 "enable": false, 00:19:50.912 "period_us": 100000 00:19:50.912 } 00:19:50.912 }, 00:19:50.912 { 00:19:50.912 "method": "bdev_malloc_create", 00:19:50.912 "params": { 00:19:50.912 "block_size": 4096, 00:19:50.912 "dif_is_head_of_md": false, 00:19:50.912 "dif_pi_format": 0, 00:19:50.912 "dif_type": 0, 00:19:50.912 "md_size": 0, 00:19:50.912 "name": "malloc0", 00:19:50.912 "num_blocks": 8192, 00:19:50.912 "optimal_io_boundary": 0, 00:19:50.912 "physical_block_size": 4096, 00:19:50.912 "uuid": "a099bffc-2434-4702-bbad-670385023f51" 00:19:50.912 } 00:19:50.912 }, 00:19:50.912 { 00:19:50.912 "method": "bdev_wait_for_examine" 00:19:50.912 } 00:19:50.912 ] 00:19:50.912 }, 00:19:50.912 { 00:19:50.912 "subsystem": "nbd", 00:19:50.912 "config": [] 00:19:50.912 }, 00:19:50.912 { 00:19:50.912 "subsystem": "scheduler", 00:19:50.912 "config": [ 00:19:50.912 { 00:19:50.912 "method": "framework_set_scheduler", 00:19:50.912 "params": { 00:19:50.912 "name": "static" 00:19:50.912 } 00:19:50.912 } 00:19:50.912 ] 00:19:50.912 }, 00:19:50.912 { 00:19:50.912 "subsystem": "nvmf", 00:19:50.912 "config": [ 00:19:50.912 { 00:19:50.912 "method": "nvmf_set_config", 00:19:50.912 "params": { 00:19:50.912 "admin_cmd_passthru": { 00:19:50.912 "identify_ctrlr": false 00:19:50.912 }, 00:19:50.912 "discovery_filter": "match_any" 00:19:50.912 } 00:19:50.912 }, 00:19:50.912 { 00:19:50.912 "method": "nvmf_set_max_subsystems", 00:19:50.912 "params": { 00:19:50.912 "max_subsystems": 1024 00:19:50.912 } 00:19:50.912 }, 00:19:50.912 { 00:19:50.912 "method": "nvmf_set_crdt", 00:19:50.912 "params": { 00:19:50.912 "crdt1": 0, 00:19:50.912 "crdt2": 0, 00:19:50.912 "crdt3": 0 00:19:50.912 } 00:19:50.912 }, 00:19:50.912 { 00:19:50.912 "method": "nvmf_create_transport", 00:19:50.912 "params": { 00:19:50.912 "abort_timeout_sec": 1, 00:19:50.912 "ack_timeout": 0, 00:19:50.912 "buf_cache_size": 4294967295, 00:19:50.912 "c2h_success": false, 00:19:50.912 "data_wr_pool_size": 0, 00:19:50.912 "dif_insert_or_strip": false, 00:19:50.912 "in_capsule_data_size": 4096, 00:19:50.912 "io_unit_size": 131072, 00:19:50.912 "max_aq_depth": 128, 00:19:50.912 "max_io_qpairs_per_ctrlr": 127, 00:19:50.912 "max_io_size": 131072, 00:19:50.912 "max_queue_depth": 128, 00:19:50.912 "num_shared_buffers": 511, 00:19:50.912 "sock_priority": 0, 00:19:50.912 "trtype": "TCP", 00:19:50.912 "zcopy": false 00:19:50.912 } 00:19:50.912 }, 00:19:50.912 { 00:19:50.912 "method": "nvmf_create_subsystem", 00:19:50.912 "params": { 00:19:50.912 "allow_any_host": false, 00:19:50.912 "ana_reporting": false, 00:19:50.912 "max_cntlid": 65519, 00:19:50.912 "max_namespaces": 10, 00:19:50.912 "min_cntlid": 1, 00:19:50.912 "model_number": "SPDK bdev Controller", 00:19:50.912 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:50.912 "serial_number": "SPDK00000000000001" 00:19:50.912 } 00:19:50.912 }, 00:19:50.912 { 00:19:50.912 "method": "nvmf_subsystem_add_host", 00:19:50.912 "params": { 00:19:50.912 "host": "nqn.2016-06.io.spdk:host1", 00:19:50.912 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:50.912 "psk": "/tmp/tmp.AvJ35qsIO2" 00:19:50.912 } 00:19:50.912 }, 00:19:50.912 { 00:19:50.912 "method": "nvmf_subsystem_add_ns", 00:19:50.912 "params": { 00:19:50.912 "namespace": { 00:19:50.912 "bdev_name": "malloc0", 00:19:50.912 "nguid": "A099BFFC24344702BBAD670385023F51", 00:19:50.912 "no_auto_visible": false, 00:19:50.912 "nsid": 1, 00:19:50.912 "uuid": "a099bffc-2434-4702-bbad-670385023f51" 00:19:50.912 }, 00:19:50.912 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:19:50.912 } 00:19:50.912 }, 
00:19:50.912 { 00:19:50.912 "method": "nvmf_subsystem_add_listener", 00:19:50.912 "params": { 00:19:50.912 "listen_address": { 00:19:50.912 "adrfam": "IPv4", 00:19:50.912 "traddr": "10.0.0.2", 00:19:50.912 "trsvcid": "4420", 00:19:50.912 "trtype": "TCP" 00:19:50.912 }, 00:19:50.912 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:50.912 "secure_channel": true 00:19:50.912 } 00:19:50.912 } 00:19:50.912 ] 00:19:50.912 } 00:19:50.912 ] 00:19:50.912 }' 00:19:50.912 17:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:51.479 17:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:19:51.479 "subsystems": [ 00:19:51.479 { 00:19:51.479 "subsystem": "keyring", 00:19:51.479 "config": [] 00:19:51.479 }, 00:19:51.479 { 00:19:51.479 "subsystem": "iobuf", 00:19:51.479 "config": [ 00:19:51.479 { 00:19:51.479 "method": "iobuf_set_options", 00:19:51.479 "params": { 00:19:51.479 "large_bufsize": 135168, 00:19:51.479 "large_pool_count": 1024, 00:19:51.479 "small_bufsize": 8192, 00:19:51.479 "small_pool_count": 8192 00:19:51.479 } 00:19:51.479 } 00:19:51.479 ] 00:19:51.479 }, 00:19:51.479 { 00:19:51.479 "subsystem": "sock", 00:19:51.479 "config": [ 00:19:51.479 { 00:19:51.479 "method": "sock_set_default_impl", 00:19:51.479 "params": { 00:19:51.479 "impl_name": "posix" 00:19:51.479 } 00:19:51.479 }, 00:19:51.479 { 00:19:51.479 "method": "sock_impl_set_options", 00:19:51.479 "params": { 00:19:51.479 "enable_ktls": false, 00:19:51.479 "enable_placement_id": 0, 00:19:51.479 "enable_quickack": false, 00:19:51.479 "enable_recv_pipe": true, 00:19:51.479 "enable_zerocopy_send_client": false, 00:19:51.479 "enable_zerocopy_send_server": true, 00:19:51.479 "impl_name": "ssl", 00:19:51.479 "recv_buf_size": 4096, 00:19:51.479 "send_buf_size": 4096, 00:19:51.479 "tls_version": 0, 00:19:51.479 "zerocopy_threshold": 0 00:19:51.479 } 00:19:51.479 }, 00:19:51.479 { 00:19:51.479 "method": "sock_impl_set_options", 00:19:51.479 "params": { 00:19:51.479 "enable_ktls": false, 00:19:51.479 "enable_placement_id": 0, 00:19:51.479 "enable_quickack": false, 00:19:51.479 "enable_recv_pipe": true, 00:19:51.479 "enable_zerocopy_send_client": false, 00:19:51.479 "enable_zerocopy_send_server": true, 00:19:51.479 "impl_name": "posix", 00:19:51.479 "recv_buf_size": 2097152, 00:19:51.479 "send_buf_size": 2097152, 00:19:51.479 "tls_version": 0, 00:19:51.479 "zerocopy_threshold": 0 00:19:51.479 } 00:19:51.479 } 00:19:51.479 ] 00:19:51.479 }, 00:19:51.479 { 00:19:51.479 "subsystem": "vmd", 00:19:51.479 "config": [] 00:19:51.479 }, 00:19:51.479 { 00:19:51.479 "subsystem": "accel", 00:19:51.479 "config": [ 00:19:51.479 { 00:19:51.479 "method": "accel_set_options", 00:19:51.479 "params": { 00:19:51.479 "buf_count": 2048, 00:19:51.479 "large_cache_size": 16, 00:19:51.479 "sequence_count": 2048, 00:19:51.479 "small_cache_size": 128, 00:19:51.479 "task_count": 2048 00:19:51.479 } 00:19:51.479 } 00:19:51.479 ] 00:19:51.479 }, 00:19:51.479 { 00:19:51.479 "subsystem": "bdev", 00:19:51.479 "config": [ 00:19:51.479 { 00:19:51.479 "method": "bdev_set_options", 00:19:51.479 "params": { 00:19:51.479 "bdev_auto_examine": true, 00:19:51.479 "bdev_io_cache_size": 256, 00:19:51.479 "bdev_io_pool_size": 65535, 00:19:51.479 "iobuf_large_cache_size": 16, 00:19:51.479 "iobuf_small_cache_size": 128 00:19:51.479 } 00:19:51.479 }, 00:19:51.479 { 00:19:51.479 "method": "bdev_raid_set_options", 00:19:51.479 "params": { 00:19:51.479 
"process_max_bandwidth_mb_sec": 0, 00:19:51.479 "process_window_size_kb": 1024 00:19:51.479 } 00:19:51.479 }, 00:19:51.479 { 00:19:51.479 "method": "bdev_iscsi_set_options", 00:19:51.479 "params": { 00:19:51.479 "timeout_sec": 30 00:19:51.479 } 00:19:51.479 }, 00:19:51.479 { 00:19:51.479 "method": "bdev_nvme_set_options", 00:19:51.479 "params": { 00:19:51.479 "action_on_timeout": "none", 00:19:51.479 "allow_accel_sequence": false, 00:19:51.479 "arbitration_burst": 0, 00:19:51.479 "bdev_retry_count": 3, 00:19:51.479 "ctrlr_loss_timeout_sec": 0, 00:19:51.479 "delay_cmd_submit": true, 00:19:51.479 "dhchap_dhgroups": [ 00:19:51.479 "null", 00:19:51.479 "ffdhe2048", 00:19:51.479 "ffdhe3072", 00:19:51.479 "ffdhe4096", 00:19:51.479 "ffdhe6144", 00:19:51.479 "ffdhe8192" 00:19:51.479 ], 00:19:51.479 "dhchap_digests": [ 00:19:51.479 "sha256", 00:19:51.479 "sha384", 00:19:51.479 "sha512" 00:19:51.479 ], 00:19:51.479 "disable_auto_failback": false, 00:19:51.479 "fast_io_fail_timeout_sec": 0, 00:19:51.479 "generate_uuids": false, 00:19:51.479 "high_priority_weight": 0, 00:19:51.479 "io_path_stat": false, 00:19:51.479 "io_queue_requests": 512, 00:19:51.479 "keep_alive_timeout_ms": 10000, 00:19:51.479 "low_priority_weight": 0, 00:19:51.479 "medium_priority_weight": 0, 00:19:51.479 "nvme_adminq_poll_period_us": 10000, 00:19:51.479 "nvme_error_stat": false, 00:19:51.479 "nvme_ioq_poll_period_us": 0, 00:19:51.479 "rdma_cm_event_timeout_ms": 0, 00:19:51.479 "rdma_max_cq_size": 0, 00:19:51.479 "rdma_srq_size": 0, 00:19:51.479 "reconnect_delay_sec": 0, 00:19:51.479 "timeout_admin_us": 0, 00:19:51.479 "timeout_us": 0, 00:19:51.479 "transport_ack_timeout": 0, 00:19:51.479 "transport_retry_count": 4, 00:19:51.480 "transport_tos": 0 00:19:51.480 } 00:19:51.480 }, 00:19:51.480 { 00:19:51.480 "method": "bdev_nvme_attach_controller", 00:19:51.480 "params": { 00:19:51.480 "adrfam": "IPv4", 00:19:51.480 "ctrlr_loss_timeout_sec": 0, 00:19:51.480 "ddgst": false, 00:19:51.480 "fast_io_fail_timeout_sec": 0, 00:19:51.480 "hdgst": false, 00:19:51.480 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:51.480 "name": "TLSTEST", 00:19:51.480 "prchk_guard": false, 00:19:51.480 "prchk_reftag": false, 00:19:51.480 "psk": "/tmp/tmp.AvJ35qsIO2", 00:19:51.480 "reconnect_delay_sec": 0, 00:19:51.480 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:51.480 "traddr": "10.0.0.2", 00:19:51.480 "trsvcid": "4420", 00:19:51.480 "trtype": "TCP" 00:19:51.480 } 00:19:51.480 }, 00:19:51.480 { 00:19:51.480 "method": "bdev_nvme_set_hotplug", 00:19:51.480 "params": { 00:19:51.480 "enable": false, 00:19:51.480 "period_us": 100000 00:19:51.480 } 00:19:51.480 }, 00:19:51.480 { 00:19:51.480 "method": "bdev_wait_for_examine" 00:19:51.480 } 00:19:51.480 ] 00:19:51.480 }, 00:19:51.480 { 00:19:51.480 "subsystem": "nbd", 00:19:51.480 "config": [] 00:19:51.480 } 00:19:51.480 ] 00:19:51.480 }' 00:19:51.480 17:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # killprocess 84002 00:19:51.480 17:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84002 ']' 00:19:51.480 17:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84002 00:19:51.480 17:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:51.480 17:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:51.480 17:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84002 00:19:51.480 
17:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:51.480 17:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:51.480 killing process with pid 84002 00:19:51.480 17:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84002' 00:19:51.480 Received shutdown signal, test time was about 10.000000 seconds 00:19:51.480 00:19:51.480 Latency(us) 00:19:51.480 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:51.480 =================================================================================================================== 00:19:51.480 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:51.480 17:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84002 00:19:51.480 [2024-07-24 17:20:37.508622] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:51.480 17:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84002 00:19:51.738 17:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@200 -- # killprocess 83899 00:19:51.738 17:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83899 ']' 00:19:51.738 17:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83899 00:19:51.738 17:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:51.738 17:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:51.738 17:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83899 00:19:51.738 17:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:51.738 17:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:51.739 killing process with pid 83899 00:19:51.739 17:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83899' 00:19:51.739 17:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83899 00:19:51.739 [2024-07-24 17:20:37.760388] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:51.739 17:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83899 00:19:51.998 17:20:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:19:51.998 17:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:51.998 17:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:19:51.998 "subsystems": [ 00:19:51.998 { 00:19:51.998 "subsystem": "keyring", 00:19:51.998 "config": [] 00:19:51.998 }, 00:19:51.998 { 00:19:51.998 "subsystem": "iobuf", 00:19:51.998 "config": [ 00:19:51.998 { 00:19:51.998 "method": "iobuf_set_options", 00:19:51.998 "params": { 00:19:51.998 "large_bufsize": 135168, 00:19:51.998 "large_pool_count": 1024, 00:19:51.998 "small_bufsize": 8192, 00:19:51.998 "small_pool_count": 8192 00:19:51.998 } 00:19:51.998 } 00:19:51.998 ] 00:19:51.998 }, 00:19:51.998 { 00:19:51.998 "subsystem": "sock", 
00:19:51.998 "config": [ 00:19:51.998 { 00:19:51.998 "method": "sock_set_default_impl", 00:19:51.998 "params": { 00:19:51.998 "impl_name": "posix" 00:19:51.998 } 00:19:51.998 }, 00:19:51.998 { 00:19:51.998 "method": "sock_impl_set_options", 00:19:51.998 "params": { 00:19:51.998 "enable_ktls": false, 00:19:51.998 "enable_placement_id": 0, 00:19:51.998 "enable_quickack": false, 00:19:51.998 "enable_recv_pipe": true, 00:19:51.998 "enable_zerocopy_send_client": false, 00:19:51.998 "enable_zerocopy_send_server": true, 00:19:51.998 "impl_name": "ssl", 00:19:51.998 "recv_buf_size": 4096, 00:19:51.998 "send_buf_size": 4096, 00:19:51.999 "tls_version": 0, 00:19:51.999 "zerocopy_threshold": 0 00:19:51.999 } 00:19:51.999 }, 00:19:51.999 { 00:19:51.999 "method": "sock_impl_set_options", 00:19:51.999 "params": { 00:19:51.999 "enable_ktls": false, 00:19:51.999 "enable_placement_id": 0, 00:19:51.999 "enable_quickack": false, 00:19:51.999 "enable_recv_pipe": true, 00:19:51.999 "enable_zerocopy_send_client": false, 00:19:51.999 "enable_zerocopy_send_server": true, 00:19:51.999 "impl_name": "posix", 00:19:51.999 "recv_buf_size": 2097152, 00:19:51.999 "send_buf_size": 2097152, 00:19:51.999 "tls_version": 0, 00:19:51.999 "zerocopy_threshold": 0 00:19:51.999 } 00:19:51.999 } 00:19:51.999 ] 00:19:51.999 }, 00:19:51.999 { 00:19:51.999 "subsystem": "vmd", 00:19:51.999 "config": [] 00:19:51.999 }, 00:19:51.999 { 00:19:51.999 "subsystem": "accel", 00:19:51.999 "config": [ 00:19:51.999 { 00:19:51.999 "method": "accel_set_options", 00:19:51.999 "params": { 00:19:51.999 "buf_count": 2048, 00:19:51.999 "large_cache_size": 16, 00:19:51.999 "sequence_count": 2048, 00:19:51.999 "small_cache_size": 128, 00:19:51.999 "task_count": 2048 00:19:51.999 } 00:19:51.999 } 00:19:51.999 ] 00:19:51.999 }, 00:19:51.999 { 00:19:51.999 "subsystem": "bdev", 00:19:51.999 "config": [ 00:19:51.999 { 00:19:51.999 "method": "bdev_set_options", 00:19:51.999 "params": { 00:19:51.999 "bdev_auto_examine": true, 00:19:51.999 "bdev_io_cache_size": 256, 00:19:51.999 "bdev_io_pool_size": 65535, 00:19:51.999 "iobuf_large_cache_size": 16, 00:19:51.999 "iobuf_small_cache_size": 128 00:19:51.999 } 00:19:51.999 }, 00:19:51.999 { 00:19:51.999 "method": "bdev_raid_set_options", 00:19:51.999 "params": { 00:19:51.999 "process_max_bandwidth_mb_sec": 0, 00:19:51.999 "process_window_size_kb": 1024 00:19:51.999 } 00:19:51.999 }, 00:19:51.999 { 00:19:51.999 "method": "bdev_iscsi_set_options", 00:19:51.999 "params": { 00:19:51.999 "timeout_sec": 30 00:19:51.999 } 00:19:51.999 }, 00:19:51.999 { 00:19:51.999 "method": "bdev_nvme_set_options", 00:19:51.999 "params": { 00:19:51.999 "action_on_timeout": "none", 00:19:51.999 "allow_accel_sequence": false, 00:19:51.999 "arbitration_burst": 0, 00:19:51.999 "bdev_retry_count": 3, 00:19:51.999 "ctrlr_loss_timeout_sec": 0, 00:19:51.999 "delay_cmd_submit": true, 00:19:51.999 "dhchap_dhgroups": [ 00:19:51.999 "null", 00:19:51.999 "ffdhe2048", 00:19:51.999 "ffdhe3072", 00:19:51.999 "ffdhe4096", 00:19:51.999 "ffdhe6144", 00:19:51.999 "ffdhe8192" 00:19:51.999 ], 00:19:51.999 "dhchap_digests": [ 00:19:51.999 "sha256", 00:19:51.999 "sha384", 00:19:51.999 "sha512" 00:19:51.999 ], 00:19:51.999 "disable_auto_failback": false, 00:19:51.999 "fast_io_fail_timeout_sec": 0, 00:19:51.999 "generate_uuids": false, 00:19:51.999 "high_priority_weight": 0, 00:19:51.999 "io_path_stat": false, 00:19:51.999 "io_queue_requests": 0, 00:19:51.999 "keep_alive_timeout_ms": 10000, 00:19:51.999 "low_priority_weight": 0, 00:19:51.999 
"medium_priority_weight": 0, 00:19:51.999 "nvme_adminq_poll_period_us": 10000, 00:19:51.999 "nvme_error_stat": false, 00:19:51.999 "nvme_ioq_poll_period_us": 0, 00:19:51.999 "rdma_cm_event_timeout_ms": 0, 00:19:51.999 "rdma_max_cq_size": 0, 00:19:51.999 "rdma_srq_size": 0, 00:19:51.999 "reconnect_delay_sec": 0, 00:19:51.999 "timeout_admin_us": 0, 00:19:51.999 "timeout_us": 0, 00:19:51.999 "transport_ack_timeout": 0, 00:19:51.999 "transport_retry_count": 4, 00:19:51.999 "transport_tos": 0 00:19:51.999 } 00:19:51.999 }, 00:19:51.999 { 00:19:51.999 "method": "bdev_nvme_set_hotplug", 00:19:51.999 "params": { 00:19:51.999 "enable": false, 00:19:51.999 "period_us": 100000 00:19:51.999 } 00:19:51.999 }, 00:19:51.999 { 00:19:51.999 "method": "bdev_malloc_create", 00:19:51.999 "params": { 00:19:51.999 "block_size": 4096, 00:19:51.999 "dif_is_head_of_md": false, 00:19:51.999 "dif_pi_format": 0, 00:19:51.999 "dif_type": 0, 00:19:51.999 "md_size": 0, 00:19:51.999 "name": "malloc0", 00:19:51.999 "num_blocks": 8192, 00:19:51.999 "optimal_io_boundary": 0, 00:19:51.999 "physical_block_size": 4096, 00:19:51.999 "uuid": "a099bffc-2434-4702-bbad-670385023f51" 00:19:51.999 } 00:19:51.999 }, 00:19:51.999 { 00:19:51.999 "method": "bdev_wait_for_examine" 00:19:51.999 } 00:19:51.999 ] 00:19:51.999 }, 00:19:51.999 { 00:19:51.999 "subsystem": "nbd", 00:19:51.999 "config": [] 00:19:51.999 }, 00:19:51.999 { 00:19:51.999 "subsystem": "scheduler", 00:19:51.999 "config": [ 00:19:51.999 { 00:19:51.999 "method": "framework_set_scheduler", 00:19:51.999 "params": { 00:19:51.999 "name": "static" 00:19:51.999 } 00:19:51.999 } 00:19:51.999 ] 00:19:51.999 }, 00:19:51.999 { 00:19:51.999 "subsystem": "nvmf", 00:19:51.999 "config": [ 00:19:51.999 { 00:19:51.999 "method": "nvmf_set_config", 00:19:51.999 "params": { 00:19:51.999 "admin_cmd_passthru": { 00:19:51.999 "identify_ctrlr": false 00:19:51.999 }, 00:19:51.999 "discovery_filter": "match_any" 00:19:51.999 } 00:19:51.999 }, 00:19:51.999 { 00:19:51.999 "method": "nvmf_set_max_subsystems", 00:19:51.999 "params": { 00:19:51.999 "max_subsystems": 1024 00:19:51.999 } 00:19:51.999 }, 00:19:51.999 { 00:19:51.999 "method": "nvmf_set_crdt", 00:19:51.999 "params": { 00:19:51.999 "crdt1": 0, 00:19:51.999 "crdt2": 0, 00:19:51.999 "crdt3": 0 00:19:51.999 } 00:19:51.999 }, 00:19:51.999 { 00:19:51.999 "method": "nvmf_create_transport", 00:19:51.999 "params": { 00:19:51.999 "abort_timeout_sec": 1, 00:19:51.999 "ack_timeout": 0, 00:19:51.999 "buf_cache_size": 4294967295, 00:19:51.999 "c2h_success": false, 00:19:51.999 "data_wr_pool_size": 0, 00:19:51.999 "dif_insert_or_strip": false, 00:19:51.999 "in_capsule_data_size": 4096, 00:19:51.999 "io_unit_size": 131072, 00:19:51.999 "max_aq_depth": 128, 00:19:51.999 "max_io_qpairs_per_ctrlr": 127, 00:19:51.999 "max_io_size": 131072, 00:19:51.999 "max_queue_depth": 128, 00:19:51.999 "num_shared_buffers": 511, 00:19:51.999 "sock_priority": 0, 00:19:51.999 "trtype": "TCP", 00:19:51.999 "zcopy": false 00:19:51.999 } 00:19:51.999 }, 00:19:51.999 { 00:19:51.999 "method": "nvmf_create_subsystem", 00:19:51.999 "params": { 00:19:51.999 "allow_any_host": false, 00:19:51.999 "ana_reporting": false, 00:19:51.999 "max_cntlid": 65519, 00:19:51.999 "max_namespaces": 10, 00:19:51.999 "min_cntlid": 1, 00:19:51.999 "model_number": "SPDK bdev Controller", 00:19:51.999 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:51.999 "serial_number": "SPDK00000000000001" 00:19:51.999 } 00:19:51.999 }, 00:19:51.999 { 00:19:51.999 "method": "nvmf_subsystem_add_host", 00:19:51.999 "params": 
{ 00:19:51.999 "host": "nqn.2016-06.io.spdk:host1", 00:19:51.999 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:51.999 "psk": "/tmp/tmp.AvJ35qsIO2" 00:19:51.999 } 00:19:51.999 }, 00:19:51.999 { 00:19:51.999 "method": "nvmf_subsystem_add_ns", 00:19:51.999 "params": { 00:19:51.999 "namespace": { 00:19:51.999 "bdev_name": "malloc0", 00:19:51.999 "nguid": "A099BFFC24344702BBAD670385023F51", 00:19:51.999 "no_auto_visible": false, 00:19:51.999 "nsid": 1, 00:19:51.999 "uuid": "a099bffc-2434-4702-bbad-670385023f51" 00:19:51.999 }, 00:19:51.999 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:19:51.999 } 00:19:51.999 }, 00:19:51.999 { 00:19:51.999 "method": "nvmf_subsystem_add_listener", 00:19:51.999 "params": { 00:19:51.999 "listen_address": { 00:19:51.999 "adrfam": "IPv4", 00:19:51.999 "traddr": "10.0.0.2", 00:19:51.999 "trsvcid": "4420", 00:19:51.999 "trtype": "TCP" 00:19:51.999 }, 00:19:51.999 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:51.999 "secure_channel": true 00:19:51.999 } 00:19:51.999 } 00:19:51.999 ] 00:19:51.999 } 00:19:51.999 ] 00:19:51.999 }' 00:19:51.999 17:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:51.999 17:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:51.999 17:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:19:51.999 17:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=84080 00:19:51.999 17:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 84080 00:19:51.999 17:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84080 ']' 00:19:51.999 17:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:51.999 17:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:52.000 17:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:52.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:52.000 17:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:52.000 17:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:52.000 [2024-07-24 17:20:38.060890] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:19:52.000 [2024-07-24 17:20:38.061002] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:52.000 [2024-07-24 17:20:38.192680] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:52.258 [2024-07-24 17:20:38.301104] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:52.258 [2024-07-24 17:20:38.301190] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:52.258 [2024-07-24 17:20:38.301203] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:52.258 [2024-07-24 17:20:38.301211] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:52.258 [2024-07-24 17:20:38.301218] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:52.258 [2024-07-24 17:20:38.301304] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:52.516 [2024-07-24 17:20:38.533614] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:52.516 [2024-07-24 17:20:38.549524] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:52.516 [2024-07-24 17:20:38.565569] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:52.516 [2024-07-24 17:20:38.565797] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:52.775 17:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:52.775 17:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:52.775 17:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:52.775 17:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:52.775 17:20:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:53.034 17:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:53.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:53.034 17:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=84123 00:19:53.034 17:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 84123 /var/tmp/bdevperf.sock 00:19:53.034 17:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84123 ']' 00:19:53.034 17:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:53.034 17:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:53.034 17:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
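The -c /dev/fd/62 (target) and -c /dev/fd/63 (bdevperf) arguments seen above are bash process substitution: the test echoes a previously saved JSON configuration and hands the resulting descriptor to the application as its config file. A minimal sketch of that pattern, with flags copied from the log and the surrounding shell purely illustrative:

  # Dump the running target's configuration (rpc.py talks to /var/tmp/spdk.sock by default)
  saved=$(scripts/rpc.py save_config)
  # Replay it into a fresh target instance via process substitution, which is
  # where the "-c /dev/fd/NN" arguments above come from
  build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c <(echo "$saved") &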
00:19:53.034 17:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:53.034 17:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:19:53.034 17:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:53.034 17:20:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:19:53.034 "subsystems": [ 00:19:53.034 { 00:19:53.034 "subsystem": "keyring", 00:19:53.034 "config": [] 00:19:53.034 }, 00:19:53.034 { 00:19:53.034 "subsystem": "iobuf", 00:19:53.034 "config": [ 00:19:53.034 { 00:19:53.034 "method": "iobuf_set_options", 00:19:53.034 "params": { 00:19:53.034 "large_bufsize": 135168, 00:19:53.034 "large_pool_count": 1024, 00:19:53.034 "small_bufsize": 8192, 00:19:53.035 "small_pool_count": 8192 00:19:53.035 } 00:19:53.035 } 00:19:53.035 ] 00:19:53.035 }, 00:19:53.035 { 00:19:53.035 "subsystem": "sock", 00:19:53.035 "config": [ 00:19:53.035 { 00:19:53.035 "method": "sock_set_default_impl", 00:19:53.035 "params": { 00:19:53.035 "impl_name": "posix" 00:19:53.035 } 00:19:53.035 }, 00:19:53.035 { 00:19:53.035 "method": "sock_impl_set_options", 00:19:53.035 "params": { 00:19:53.035 "enable_ktls": false, 00:19:53.035 "enable_placement_id": 0, 00:19:53.035 "enable_quickack": false, 00:19:53.035 "enable_recv_pipe": true, 00:19:53.035 "enable_zerocopy_send_client": false, 00:19:53.035 "enable_zerocopy_send_server": true, 00:19:53.035 "impl_name": "ssl", 00:19:53.035 "recv_buf_size": 4096, 00:19:53.035 "send_buf_size": 4096, 00:19:53.035 "tls_version": 0, 00:19:53.035 "zerocopy_threshold": 0 00:19:53.035 } 00:19:53.035 }, 00:19:53.035 { 00:19:53.035 "method": "sock_impl_set_options", 00:19:53.035 "params": { 00:19:53.035 "enable_ktls": false, 00:19:53.035 "enable_placement_id": 0, 00:19:53.035 "enable_quickack": false, 00:19:53.035 "enable_recv_pipe": true, 00:19:53.035 "enable_zerocopy_send_client": false, 00:19:53.035 "enable_zerocopy_send_server": true, 00:19:53.035 "impl_name": "posix", 00:19:53.035 "recv_buf_size": 2097152, 00:19:53.035 "send_buf_size": 2097152, 00:19:53.035 "tls_version": 0, 00:19:53.035 "zerocopy_threshold": 0 00:19:53.035 } 00:19:53.035 } 00:19:53.035 ] 00:19:53.035 }, 00:19:53.035 { 00:19:53.035 "subsystem": "vmd", 00:19:53.035 "config": [] 00:19:53.035 }, 00:19:53.035 { 00:19:53.035 "subsystem": "accel", 00:19:53.035 "config": [ 00:19:53.035 { 00:19:53.035 "method": "accel_set_options", 00:19:53.035 "params": { 00:19:53.035 "buf_count": 2048, 00:19:53.035 "large_cache_size": 16, 00:19:53.035 "sequence_count": 2048, 00:19:53.035 "small_cache_size": 128, 00:19:53.035 "task_count": 2048 00:19:53.035 } 00:19:53.035 } 00:19:53.035 ] 00:19:53.035 }, 00:19:53.035 { 00:19:53.035 "subsystem": "bdev", 00:19:53.035 "config": [ 00:19:53.035 { 00:19:53.035 "method": "bdev_set_options", 00:19:53.035 "params": { 00:19:53.035 "bdev_auto_examine": true, 00:19:53.035 "bdev_io_cache_size": 256, 00:19:53.035 "bdev_io_pool_size": 65535, 00:19:53.035 "iobuf_large_cache_size": 16, 00:19:53.035 "iobuf_small_cache_size": 128 00:19:53.035 } 00:19:53.035 }, 00:19:53.035 { 00:19:53.035 "method": "bdev_raid_set_options", 00:19:53.035 "params": { 00:19:53.035 "process_max_bandwidth_mb_sec": 0, 00:19:53.035 "process_window_size_kb": 1024 00:19:53.035 } 00:19:53.035 }, 00:19:53.035 { 00:19:53.035 "method": "bdev_iscsi_set_options", 00:19:53.035 "params": { 
00:19:53.035 "timeout_sec": 30 00:19:53.035 } 00:19:53.035 }, 00:19:53.035 { 00:19:53.035 "method": "bdev_nvme_set_options", 00:19:53.035 "params": { 00:19:53.035 "action_on_timeout": "none", 00:19:53.035 "allow_accel_sequence": false, 00:19:53.035 "arbitration_burst": 0, 00:19:53.035 "bdev_retry_count": 3, 00:19:53.035 "ctrlr_loss_timeout_sec": 0, 00:19:53.035 "delay_cmd_submit": true, 00:19:53.035 "dhchap_dhgroups": [ 00:19:53.035 "null", 00:19:53.035 "ffdhe2048", 00:19:53.035 "ffdhe3072", 00:19:53.035 "ffdhe4096", 00:19:53.035 "ffdhe6144", 00:19:53.035 "ffdhe8192" 00:19:53.035 ], 00:19:53.035 "dhchap_digests": [ 00:19:53.035 "sha256", 00:19:53.035 "sha384", 00:19:53.035 "sha512" 00:19:53.035 ], 00:19:53.035 "disable_auto_failback": false, 00:19:53.035 "fast_io_fail_timeout_sec": 0, 00:19:53.035 "generate_uuids": false, 00:19:53.035 "high_priority_weight": 0, 00:19:53.035 "io_path_stat": false, 00:19:53.035 "io_queue_requests": 512, 00:19:53.035 "keep_alive_timeout_ms": 10000, 00:19:53.035 "low_priority_weight": 0, 00:19:53.035 "medium_priority_weight": 0, 00:19:53.035 "nvme_adminq_poll_period_us": 10000, 00:19:53.035 "nvme_error_stat": false, 00:19:53.035 "nvme_ioq_poll_period_us": 0, 00:19:53.035 "rdma_cm_event_timeout_ms": 0, 00:19:53.035 "rdma_max_cq_size": 0, 00:19:53.035 "rdma_srq_size": 0, 00:19:53.035 "reconnect_delay_sec": 0, 00:19:53.035 "timeout_admin_us": 0, 00:19:53.035 "timeout_us": 0, 00:19:53.035 "transport_ack_timeout": 0, 00:19:53.035 "transport_retry_count": 4, 00:19:53.035 "transport_tos": 0 00:19:53.035 } 00:19:53.035 }, 00:19:53.035 { 00:19:53.035 "method": "bdev_nvme_attach_controller", 00:19:53.035 "params": { 00:19:53.035 "adrfam": "IPv4", 00:19:53.035 "ctrlr_loss_timeout_sec": 0, 00:19:53.035 "ddgst": false, 00:19:53.035 "fast_io_fail_timeout_sec": 0, 00:19:53.035 "hdgst": false, 00:19:53.035 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:53.035 "name": "TLSTEST", 00:19:53.035 "prchk_guard": false, 00:19:53.035 "prchk_reftag": false, 00:19:53.035 "psk": "/tmp/tmp.AvJ35qsIO2", 00:19:53.035 "reconnect_delay_sec": 0, 00:19:53.035 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:53.035 "traddr": "10.0.0.2", 00:19:53.035 "trsvcid": "4420", 00:19:53.035 "trtype": "TCP" 00:19:53.035 } 00:19:53.035 }, 00:19:53.035 { 00:19:53.035 "method": "bdev_nvme_set_hotplug", 00:19:53.035 "params": { 00:19:53.035 "enable": false, 00:19:53.035 "period_us": 100000 00:19:53.035 } 00:19:53.035 }, 00:19:53.035 { 00:19:53.035 "method": "bdev_wait_for_examine" 00:19:53.035 } 00:19:53.035 ] 00:19:53.035 }, 00:19:53.035 { 00:19:53.035 "subsystem": "nbd", 00:19:53.035 "config": [] 00:19:53.035 } 00:19:53.035 ] 00:19:53.035 }' 00:19:53.035 [2024-07-24 17:20:39.072927] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
00:19:53.035 [2024-07-24 17:20:39.073024] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84123 ] 00:19:53.035 [2024-07-24 17:20:39.211328] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:53.294 [2024-07-24 17:20:39.327546] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:53.294 [2024-07-24 17:20:39.498150] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:53.294 [2024-07-24 17:20:39.498282] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:53.861 17:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:53.861 17:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:53.861 17:20:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@211 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:54.119 Running I/O for 10 seconds... 00:20:04.086 00:20:04.086 Latency(us) 00:20:04.086 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:04.086 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:04.086 Verification LBA range: start 0x0 length 0x2000 00:20:04.086 TLSTESTn1 : 10.02 3683.51 14.39 0.00 0.00 34692.59 5362.04 47662.55 00:20:04.086 =================================================================================================================== 00:20:04.086 Total : 3683.51 14.39 0.00 0.00 34692.59 5362.04 47662.55 00:20:04.086 0 00:20:04.086 17:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:04.086 17:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@214 -- # killprocess 84123 00:20:04.086 17:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84123 ']' 00:20:04.086 17:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84123 00:20:04.087 17:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:04.087 17:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:04.087 17:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84123 00:20:04.087 17:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:04.087 17:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:04.087 killing process with pid 84123 00:20:04.087 17:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84123' 00:20:04.087 Received shutdown signal, test time was about 10.000000 seconds 00:20:04.087 00:20:04.087 Latency(us) 00:20:04.087 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:04.087 =================================================================================================================== 00:20:04.087 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:04.087 17:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@969 -- # kill 84123 00:20:04.087 [2024-07-24 17:20:50.191679] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:04.087 17:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84123 00:20:04.344 17:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # killprocess 84080 00:20:04.344 17:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84080 ']' 00:20:04.344 17:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84080 00:20:04.344 17:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:04.344 17:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:04.344 17:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84080 00:20:04.344 17:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:04.344 17:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:04.344 killing process with pid 84080 00:20:04.344 17:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84080' 00:20:04.344 17:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84080 00:20:04.344 [2024-07-24 17:20:50.548092] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:04.344 17:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84080 00:20:04.604 17:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:20:04.604 17:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:04.604 17:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:04.604 17:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:04.604 17:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=84275 00:20:04.604 17:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 84275 00:20:04.604 17:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:04.604 17:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84275 ']' 00:20:04.604 17:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:04.604 17:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:04.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:04.604 17:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
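The nvmfappstart plus waitforlisten pair above amounts to launching the target and polling its RPC socket until it responds. A rough equivalent is sketched below; rpc_get_methods as the liveness probe is an assumption (the helpers' internals are not visible in this log), and the suite's ip netns exec prefix is dropped:

  # Start the target in the background with the same flags as above
  build/bin/nvmf_tgt -i 0 -e 0xFFFF &
  nvmfpid=$!
  # Poll the default RPC socket until the app answers
  until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done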
00:20:04.604 17:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:04.604 17:20:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:04.865 [2024-07-24 17:20:50.867266] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:20:04.865 [2024-07-24 17:20:50.867411] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:04.865 [2024-07-24 17:20:51.007555] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:05.122 [2024-07-24 17:20:51.141422] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:05.122 [2024-07-24 17:20:51.141498] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:05.122 [2024-07-24 17:20:51.141525] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:05.122 [2024-07-24 17:20:51.141536] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:05.122 [2024-07-24 17:20:51.141546] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:05.122 [2024-07-24 17:20:51.141579] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:05.686 17:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:05.686 17:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:05.686 17:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:05.686 17:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:05.686 17:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:05.686 17:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:05.686 17:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.AvJ35qsIO2 00:20:05.686 17:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.AvJ35qsIO2 00:20:05.686 17:20:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:05.943 [2024-07-24 17:20:52.109963] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:05.943 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:06.201 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:06.458 [2024-07-24 17:20:52.682167] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:06.458 [2024-07-24 17:20:52.682408] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:06.715 17:20:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 
00:20:06.973 malloc0 00:20:06.973 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:07.231 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.AvJ35qsIO2 00:20:07.488 [2024-07-24 17:20:53.550529] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:07.488 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:07.488 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=84378 00:20:07.488 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:07.488 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 84378 /var/tmp/bdevperf.sock 00:20:07.488 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84378 ']' 00:20:07.488 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:07.488 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:07.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:07.488 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:07.488 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:07.488 17:20:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:07.488 [2024-07-24 17:20:53.631217] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
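Collected from the trace above, the target-side TLS setup for this stage reduces to the following rpc.py sequence; the arguments are verbatim from the log, only the long repository prefix on rpc.py is trimmed and the grouping is editorial:

  scripts/rpc.py nvmf_create_transport -t tcp -o
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  # -k requests the secure-channel (TLS) listener, which is what triggers the
  # "TLS support is considered experimental" notices above
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  # Deprecated PSK-path form, matching the nvmf_tcp_psk_path warning in the log
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.AvJ35qsIO2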
00:20:07.488 [2024-07-24 17:20:53.631355] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84378 ] 00:20:07.744 [2024-07-24 17:20:53.771848] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:07.744 [2024-07-24 17:20:53.894737] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:08.677 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:08.677 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:08.677 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.AvJ35qsIO2 00:20:08.935 17:20:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@228 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:08.935 [2024-07-24 17:20:55.139558] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:09.193 nvme0n1 00:20:09.193 17:20:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@232 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:09.193 Running I/O for 1 seconds... 00:20:10.129 00:20:10.129 Latency(us) 00:20:10.129 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:10.129 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:10.129 Verification LBA range: start 0x0 length 0x2000 00:20:10.129 nvme0n1 : 1.03 3866.39 15.10 0.00 0.00 32725.85 9532.51 21805.61 00:20:10.130 =================================================================================================================== 00:20:10.130 Total : 3866.39 15.10 0.00 0.00 32725.85 9532.51 21805.61 00:20:10.130 0 00:20:10.389 17:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # killprocess 84378 00:20:10.389 17:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84378 ']' 00:20:10.389 17:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84378 00:20:10.389 17:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:10.389 17:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:10.389 17:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84378 00:20:10.389 17:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:10.389 17:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:10.389 killing process with pid 84378 00:20:10.389 17:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84378' 00:20:10.389 17:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84378 00:20:10.389 Received shutdown signal, test time was about 1.000000 seconds 00:20:10.389 00:20:10.389 Latency(us) 00:20:10.389 Device Information : runtime(s) IOPS 
MiB/s Fail/s TO/s Average min max 00:20:10.389 =================================================================================================================== 00:20:10.389 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:10.389 17:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84378 00:20:10.649 17:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@235 -- # killprocess 84275 00:20:10.649 17:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84275 ']' 00:20:10.649 17:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84275 00:20:10.649 17:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:10.649 17:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:10.649 17:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84275 00:20:10.649 17:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:10.649 17:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:10.649 killing process with pid 84275 00:20:10.649 17:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84275' 00:20:10.649 17:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84275 00:20:10.649 [2024-07-24 17:20:56.667354] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:10.649 17:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84275 00:20:10.907 17:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:20:10.908 17:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:10.908 17:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:10.908 17:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:10.908 17:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:10.908 17:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=84453 00:20:10.908 17:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 84453 00:20:10.908 17:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84453 ']' 00:20:10.908 17:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:10.908 17:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:10.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:10.908 17:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
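On the initiator side, the keyring-based runs above and below (bdevperf pids 84378 and 84503) come down to three calls against bdevperf's own RPC socket; the arguments are copied from the trace, only the grouping here is editorial:

  # Register the PSK file under a key name, then attach using --psk <key name>
  scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.AvJ35qsIO2
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
  # Kick off the queued bdevperf job once the bdev exists
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests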
00:20:10.908 17:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:10.908 17:20:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:10.908 [2024-07-24 17:20:56.950421] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:20:10.908 [2024-07-24 17:20:56.950560] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:10.908 [2024-07-24 17:20:57.082238] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:11.166 [2024-07-24 17:20:57.185583] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:11.166 [2024-07-24 17:20:57.185647] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:11.166 [2024-07-24 17:20:57.185675] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:11.166 [2024-07-24 17:20:57.185684] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:11.166 [2024-07-24 17:20:57.185691] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:11.166 [2024-07-24 17:20:57.185717] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:11.733 17:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:11.733 17:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:11.733 17:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:11.733 17:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:11.733 17:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:11.733 17:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:11.734 17:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:20:11.734 17:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.734 17:20:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:11.992 [2024-07-24 17:20:57.974612] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:11.992 malloc0 00:20:11.992 [2024-07-24 17:20:58.006313] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:11.992 [2024-07-24 17:20:58.006564] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:11.992 17:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.993 17:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=84503 00:20:11.993 17:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@252 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:11.993 17:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 84503 /var/tmp/bdevperf.sock 00:20:11.993 17:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84503 ']' 
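Further down, both ends of this setup are dumped again for comparison (the tgtcfg and bperfcfg variables). The capture is simply save_config against each RPC socket; the script uses its rpc_cmd helper for the target side, and rpc.py on the default socket is the rough equivalent sketched here:

  tgtcfg=$(scripts/rpc.py save_config)                               # target side, /var/tmp/spdk.sock
  bperfcfg=$(scripts/rpc.py -s /var/tmp/bdevperf.sock save_config)   # bdevperf side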
00:20:11.993 17:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:11.993 17:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:11.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:11.993 17:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:11.993 17:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:11.993 17:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:11.993 [2024-07-24 17:20:58.084680] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:20:11.993 [2024-07-24 17:20:58.084791] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84503 ] 00:20:11.993 [2024-07-24 17:20:58.222548] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:12.251 [2024-07-24 17:20:58.341183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:12.251 17:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:12.251 17:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:12.251 17:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.AvJ35qsIO2 00:20:12.818 17:20:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:12.818 [2024-07-24 17:20:58.955506] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:12.818 nvme0n1 00:20:12.818 17:20:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@262 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:13.077 Running I/O for 1 seconds... 
00:20:14.014 00:20:14.014 Latency(us) 00:20:14.014 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:14.014 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:14.014 Verification LBA range: start 0x0 length 0x2000 00:20:14.014 nvme0n1 : 1.03 4077.26 15.93 0.00 0.00 30958.02 6881.28 19065.02 00:20:14.014 =================================================================================================================== 00:20:14.014 Total : 4077.26 15.93 0.00 0.00 30958.02 6881.28 19065.02 00:20:14.014 0 00:20:14.014 17:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:20:14.014 17:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.014 17:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:14.273 17:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.273 17:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:20:14.273 "subsystems": [ 00:20:14.273 { 00:20:14.273 "subsystem": "keyring", 00:20:14.273 "config": [ 00:20:14.273 { 00:20:14.273 "method": "keyring_file_add_key", 00:20:14.273 "params": { 00:20:14.273 "name": "key0", 00:20:14.273 "path": "/tmp/tmp.AvJ35qsIO2" 00:20:14.273 } 00:20:14.273 } 00:20:14.273 ] 00:20:14.273 }, 00:20:14.273 { 00:20:14.273 "subsystem": "iobuf", 00:20:14.273 "config": [ 00:20:14.273 { 00:20:14.273 "method": "iobuf_set_options", 00:20:14.273 "params": { 00:20:14.273 "large_bufsize": 135168, 00:20:14.273 "large_pool_count": 1024, 00:20:14.273 "small_bufsize": 8192, 00:20:14.273 "small_pool_count": 8192 00:20:14.273 } 00:20:14.273 } 00:20:14.273 ] 00:20:14.273 }, 00:20:14.273 { 00:20:14.273 "subsystem": "sock", 00:20:14.273 "config": [ 00:20:14.273 { 00:20:14.273 "method": "sock_set_default_impl", 00:20:14.273 "params": { 00:20:14.273 "impl_name": "posix" 00:20:14.273 } 00:20:14.273 }, 00:20:14.273 { 00:20:14.273 "method": "sock_impl_set_options", 00:20:14.273 "params": { 00:20:14.273 "enable_ktls": false, 00:20:14.273 "enable_placement_id": 0, 00:20:14.273 "enable_quickack": false, 00:20:14.273 "enable_recv_pipe": true, 00:20:14.273 "enable_zerocopy_send_client": false, 00:20:14.273 "enable_zerocopy_send_server": true, 00:20:14.273 "impl_name": "ssl", 00:20:14.273 "recv_buf_size": 4096, 00:20:14.273 "send_buf_size": 4096, 00:20:14.273 "tls_version": 0, 00:20:14.273 "zerocopy_threshold": 0 00:20:14.273 } 00:20:14.273 }, 00:20:14.273 { 00:20:14.273 "method": "sock_impl_set_options", 00:20:14.273 "params": { 00:20:14.273 "enable_ktls": false, 00:20:14.273 "enable_placement_id": 0, 00:20:14.273 "enable_quickack": false, 00:20:14.273 "enable_recv_pipe": true, 00:20:14.273 "enable_zerocopy_send_client": false, 00:20:14.273 "enable_zerocopy_send_server": true, 00:20:14.273 "impl_name": "posix", 00:20:14.273 "recv_buf_size": 2097152, 00:20:14.273 "send_buf_size": 2097152, 00:20:14.273 "tls_version": 0, 00:20:14.273 "zerocopy_threshold": 0 00:20:14.273 } 00:20:14.273 } 00:20:14.273 ] 00:20:14.273 }, 00:20:14.273 { 00:20:14.273 "subsystem": "vmd", 00:20:14.273 "config": [] 00:20:14.273 }, 00:20:14.273 { 00:20:14.273 "subsystem": "accel", 00:20:14.273 "config": [ 00:20:14.273 { 00:20:14.273 "method": "accel_set_options", 00:20:14.273 "params": { 00:20:14.273 "buf_count": 2048, 00:20:14.273 "large_cache_size": 16, 00:20:14.273 "sequence_count": 2048, 00:20:14.273 "small_cache_size": 128, 00:20:14.273 "task_count": 
2048 00:20:14.273 } 00:20:14.273 } 00:20:14.273 ] 00:20:14.273 }, 00:20:14.273 { 00:20:14.273 "subsystem": "bdev", 00:20:14.273 "config": [ 00:20:14.273 { 00:20:14.273 "method": "bdev_set_options", 00:20:14.273 "params": { 00:20:14.273 "bdev_auto_examine": true, 00:20:14.273 "bdev_io_cache_size": 256, 00:20:14.273 "bdev_io_pool_size": 65535, 00:20:14.273 "iobuf_large_cache_size": 16, 00:20:14.273 "iobuf_small_cache_size": 128 00:20:14.273 } 00:20:14.273 }, 00:20:14.273 { 00:20:14.273 "method": "bdev_raid_set_options", 00:20:14.273 "params": { 00:20:14.273 "process_max_bandwidth_mb_sec": 0, 00:20:14.273 "process_window_size_kb": 1024 00:20:14.273 } 00:20:14.273 }, 00:20:14.273 { 00:20:14.273 "method": "bdev_iscsi_set_options", 00:20:14.273 "params": { 00:20:14.273 "timeout_sec": 30 00:20:14.273 } 00:20:14.273 }, 00:20:14.273 { 00:20:14.273 "method": "bdev_nvme_set_options", 00:20:14.273 "params": { 00:20:14.273 "action_on_timeout": "none", 00:20:14.273 "allow_accel_sequence": false, 00:20:14.273 "arbitration_burst": 0, 00:20:14.273 "bdev_retry_count": 3, 00:20:14.273 "ctrlr_loss_timeout_sec": 0, 00:20:14.273 "delay_cmd_submit": true, 00:20:14.273 "dhchap_dhgroups": [ 00:20:14.273 "null", 00:20:14.273 "ffdhe2048", 00:20:14.273 "ffdhe3072", 00:20:14.273 "ffdhe4096", 00:20:14.273 "ffdhe6144", 00:20:14.273 "ffdhe8192" 00:20:14.273 ], 00:20:14.273 "dhchap_digests": [ 00:20:14.273 "sha256", 00:20:14.273 "sha384", 00:20:14.273 "sha512" 00:20:14.273 ], 00:20:14.273 "disable_auto_failback": false, 00:20:14.273 "fast_io_fail_timeout_sec": 0, 00:20:14.273 "generate_uuids": false, 00:20:14.273 "high_priority_weight": 0, 00:20:14.273 "io_path_stat": false, 00:20:14.273 "io_queue_requests": 0, 00:20:14.273 "keep_alive_timeout_ms": 10000, 00:20:14.273 "low_priority_weight": 0, 00:20:14.273 "medium_priority_weight": 0, 00:20:14.273 "nvme_adminq_poll_period_us": 10000, 00:20:14.273 "nvme_error_stat": false, 00:20:14.273 "nvme_ioq_poll_period_us": 0, 00:20:14.273 "rdma_cm_event_timeout_ms": 0, 00:20:14.273 "rdma_max_cq_size": 0, 00:20:14.273 "rdma_srq_size": 0, 00:20:14.273 "reconnect_delay_sec": 0, 00:20:14.273 "timeout_admin_us": 0, 00:20:14.273 "timeout_us": 0, 00:20:14.273 "transport_ack_timeout": 0, 00:20:14.273 "transport_retry_count": 4, 00:20:14.273 "transport_tos": 0 00:20:14.273 } 00:20:14.273 }, 00:20:14.273 { 00:20:14.273 "method": "bdev_nvme_set_hotplug", 00:20:14.273 "params": { 00:20:14.273 "enable": false, 00:20:14.273 "period_us": 100000 00:20:14.273 } 00:20:14.273 }, 00:20:14.273 { 00:20:14.273 "method": "bdev_malloc_create", 00:20:14.273 "params": { 00:20:14.273 "block_size": 4096, 00:20:14.273 "dif_is_head_of_md": false, 00:20:14.273 "dif_pi_format": 0, 00:20:14.273 "dif_type": 0, 00:20:14.273 "md_size": 0, 00:20:14.273 "name": "malloc0", 00:20:14.273 "num_blocks": 8192, 00:20:14.273 "optimal_io_boundary": 0, 00:20:14.273 "physical_block_size": 4096, 00:20:14.273 "uuid": "b95342b1-e59a-4ba8-9987-c33084c3a910" 00:20:14.273 } 00:20:14.273 }, 00:20:14.273 { 00:20:14.273 "method": "bdev_wait_for_examine" 00:20:14.273 } 00:20:14.273 ] 00:20:14.273 }, 00:20:14.273 { 00:20:14.273 "subsystem": "nbd", 00:20:14.273 "config": [] 00:20:14.273 }, 00:20:14.273 { 00:20:14.273 "subsystem": "scheduler", 00:20:14.273 "config": [ 00:20:14.273 { 00:20:14.273 "method": "framework_set_scheduler", 00:20:14.273 "params": { 00:20:14.273 "name": "static" 00:20:14.273 } 00:20:14.273 } 00:20:14.273 ] 00:20:14.273 }, 00:20:14.273 { 00:20:14.273 "subsystem": "nvmf", 00:20:14.273 "config": [ 00:20:14.273 { 00:20:14.273 
"method": "nvmf_set_config", 00:20:14.273 "params": { 00:20:14.273 "admin_cmd_passthru": { 00:20:14.273 "identify_ctrlr": false 00:20:14.273 }, 00:20:14.273 "discovery_filter": "match_any" 00:20:14.273 } 00:20:14.273 }, 00:20:14.273 { 00:20:14.273 "method": "nvmf_set_max_subsystems", 00:20:14.273 "params": { 00:20:14.273 "max_subsystems": 1024 00:20:14.274 } 00:20:14.274 }, 00:20:14.274 { 00:20:14.274 "method": "nvmf_set_crdt", 00:20:14.274 "params": { 00:20:14.274 "crdt1": 0, 00:20:14.274 "crdt2": 0, 00:20:14.274 "crdt3": 0 00:20:14.274 } 00:20:14.274 }, 00:20:14.274 { 00:20:14.274 "method": "nvmf_create_transport", 00:20:14.274 "params": { 00:20:14.274 "abort_timeout_sec": 1, 00:20:14.274 "ack_timeout": 0, 00:20:14.274 "buf_cache_size": 4294967295, 00:20:14.274 "c2h_success": false, 00:20:14.274 "data_wr_pool_size": 0, 00:20:14.274 "dif_insert_or_strip": false, 00:20:14.274 "in_capsule_data_size": 4096, 00:20:14.274 "io_unit_size": 131072, 00:20:14.274 "max_aq_depth": 128, 00:20:14.274 "max_io_qpairs_per_ctrlr": 127, 00:20:14.274 "max_io_size": 131072, 00:20:14.274 "max_queue_depth": 128, 00:20:14.274 "num_shared_buffers": 511, 00:20:14.274 "sock_priority": 0, 00:20:14.274 "trtype": "TCP", 00:20:14.274 "zcopy": false 00:20:14.274 } 00:20:14.274 }, 00:20:14.274 { 00:20:14.274 "method": "nvmf_create_subsystem", 00:20:14.274 "params": { 00:20:14.274 "allow_any_host": false, 00:20:14.274 "ana_reporting": false, 00:20:14.274 "max_cntlid": 65519, 00:20:14.274 "max_namespaces": 32, 00:20:14.274 "min_cntlid": 1, 00:20:14.274 "model_number": "SPDK bdev Controller", 00:20:14.274 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:14.274 "serial_number": "00000000000000000000" 00:20:14.274 } 00:20:14.274 }, 00:20:14.274 { 00:20:14.274 "method": "nvmf_subsystem_add_host", 00:20:14.274 "params": { 00:20:14.274 "host": "nqn.2016-06.io.spdk:host1", 00:20:14.274 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:14.274 "psk": "key0" 00:20:14.274 } 00:20:14.274 }, 00:20:14.274 { 00:20:14.274 "method": "nvmf_subsystem_add_ns", 00:20:14.274 "params": { 00:20:14.274 "namespace": { 00:20:14.274 "bdev_name": "malloc0", 00:20:14.274 "nguid": "B95342B1E59A4BA89987C33084C3A910", 00:20:14.274 "no_auto_visible": false, 00:20:14.274 "nsid": 1, 00:20:14.274 "uuid": "b95342b1-e59a-4ba8-9987-c33084c3a910" 00:20:14.274 }, 00:20:14.274 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:20:14.274 } 00:20:14.274 }, 00:20:14.274 { 00:20:14.274 "method": "nvmf_subsystem_add_listener", 00:20:14.274 "params": { 00:20:14.274 "listen_address": { 00:20:14.274 "adrfam": "IPv4", 00:20:14.274 "traddr": "10.0.0.2", 00:20:14.274 "trsvcid": "4420", 00:20:14.274 "trtype": "TCP" 00:20:14.274 }, 00:20:14.274 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:14.274 "secure_channel": false, 00:20:14.274 "sock_impl": "ssl" 00:20:14.274 } 00:20:14.274 } 00:20:14.274 ] 00:20:14.274 } 00:20:14.274 ] 00:20:14.274 }' 00:20:14.274 17:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:14.535 17:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:20:14.535 "subsystems": [ 00:20:14.535 { 00:20:14.535 "subsystem": "keyring", 00:20:14.535 "config": [ 00:20:14.535 { 00:20:14.535 "method": "keyring_file_add_key", 00:20:14.535 "params": { 00:20:14.535 "name": "key0", 00:20:14.535 "path": "/tmp/tmp.AvJ35qsIO2" 00:20:14.535 } 00:20:14.535 } 00:20:14.535 ] 00:20:14.535 }, 00:20:14.535 { 00:20:14.535 "subsystem": "iobuf", 00:20:14.535 "config": [ 00:20:14.535 
{ 00:20:14.535 "method": "iobuf_set_options", 00:20:14.535 "params": { 00:20:14.535 "large_bufsize": 135168, 00:20:14.535 "large_pool_count": 1024, 00:20:14.535 "small_bufsize": 8192, 00:20:14.535 "small_pool_count": 8192 00:20:14.535 } 00:20:14.535 } 00:20:14.535 ] 00:20:14.535 }, 00:20:14.535 { 00:20:14.535 "subsystem": "sock", 00:20:14.535 "config": [ 00:20:14.535 { 00:20:14.535 "method": "sock_set_default_impl", 00:20:14.535 "params": { 00:20:14.535 "impl_name": "posix" 00:20:14.535 } 00:20:14.535 }, 00:20:14.535 { 00:20:14.535 "method": "sock_impl_set_options", 00:20:14.535 "params": { 00:20:14.535 "enable_ktls": false, 00:20:14.535 "enable_placement_id": 0, 00:20:14.535 "enable_quickack": false, 00:20:14.535 "enable_recv_pipe": true, 00:20:14.535 "enable_zerocopy_send_client": false, 00:20:14.535 "enable_zerocopy_send_server": true, 00:20:14.535 "impl_name": "ssl", 00:20:14.535 "recv_buf_size": 4096, 00:20:14.535 "send_buf_size": 4096, 00:20:14.535 "tls_version": 0, 00:20:14.535 "zerocopy_threshold": 0 00:20:14.535 } 00:20:14.535 }, 00:20:14.535 { 00:20:14.535 "method": "sock_impl_set_options", 00:20:14.535 "params": { 00:20:14.535 "enable_ktls": false, 00:20:14.535 "enable_placement_id": 0, 00:20:14.535 "enable_quickack": false, 00:20:14.535 "enable_recv_pipe": true, 00:20:14.535 "enable_zerocopy_send_client": false, 00:20:14.535 "enable_zerocopy_send_server": true, 00:20:14.535 "impl_name": "posix", 00:20:14.535 "recv_buf_size": 2097152, 00:20:14.535 "send_buf_size": 2097152, 00:20:14.535 "tls_version": 0, 00:20:14.535 "zerocopy_threshold": 0 00:20:14.535 } 00:20:14.535 } 00:20:14.535 ] 00:20:14.535 }, 00:20:14.535 { 00:20:14.535 "subsystem": "vmd", 00:20:14.535 "config": [] 00:20:14.535 }, 00:20:14.535 { 00:20:14.535 "subsystem": "accel", 00:20:14.535 "config": [ 00:20:14.535 { 00:20:14.535 "method": "accel_set_options", 00:20:14.535 "params": { 00:20:14.535 "buf_count": 2048, 00:20:14.535 "large_cache_size": 16, 00:20:14.535 "sequence_count": 2048, 00:20:14.535 "small_cache_size": 128, 00:20:14.535 "task_count": 2048 00:20:14.535 } 00:20:14.535 } 00:20:14.535 ] 00:20:14.535 }, 00:20:14.535 { 00:20:14.535 "subsystem": "bdev", 00:20:14.535 "config": [ 00:20:14.535 { 00:20:14.535 "method": "bdev_set_options", 00:20:14.535 "params": { 00:20:14.535 "bdev_auto_examine": true, 00:20:14.535 "bdev_io_cache_size": 256, 00:20:14.535 "bdev_io_pool_size": 65535, 00:20:14.535 "iobuf_large_cache_size": 16, 00:20:14.535 "iobuf_small_cache_size": 128 00:20:14.535 } 00:20:14.535 }, 00:20:14.535 { 00:20:14.535 "method": "bdev_raid_set_options", 00:20:14.535 "params": { 00:20:14.535 "process_max_bandwidth_mb_sec": 0, 00:20:14.535 "process_window_size_kb": 1024 00:20:14.535 } 00:20:14.535 }, 00:20:14.535 { 00:20:14.535 "method": "bdev_iscsi_set_options", 00:20:14.535 "params": { 00:20:14.535 "timeout_sec": 30 00:20:14.535 } 00:20:14.535 }, 00:20:14.535 { 00:20:14.535 "method": "bdev_nvme_set_options", 00:20:14.535 "params": { 00:20:14.535 "action_on_timeout": "none", 00:20:14.535 "allow_accel_sequence": false, 00:20:14.535 "arbitration_burst": 0, 00:20:14.535 "bdev_retry_count": 3, 00:20:14.535 "ctrlr_loss_timeout_sec": 0, 00:20:14.535 "delay_cmd_submit": true, 00:20:14.535 "dhchap_dhgroups": [ 00:20:14.535 "null", 00:20:14.535 "ffdhe2048", 00:20:14.535 "ffdhe3072", 00:20:14.535 "ffdhe4096", 00:20:14.535 "ffdhe6144", 00:20:14.535 "ffdhe8192" 00:20:14.535 ], 00:20:14.535 "dhchap_digests": [ 00:20:14.535 "sha256", 00:20:14.535 "sha384", 00:20:14.536 "sha512" 00:20:14.536 ], 00:20:14.536 
"disable_auto_failback": false, 00:20:14.536 "fast_io_fail_timeout_sec": 0, 00:20:14.536 "generate_uuids": false, 00:20:14.536 "high_priority_weight": 0, 00:20:14.536 "io_path_stat": false, 00:20:14.536 "io_queue_requests": 512, 00:20:14.536 "keep_alive_timeout_ms": 10000, 00:20:14.536 "low_priority_weight": 0, 00:20:14.536 "medium_priority_weight": 0, 00:20:14.536 "nvme_adminq_poll_period_us": 10000, 00:20:14.536 "nvme_error_stat": false, 00:20:14.536 "nvme_ioq_poll_period_us": 0, 00:20:14.536 "rdma_cm_event_timeout_ms": 0, 00:20:14.536 "rdma_max_cq_size": 0, 00:20:14.536 "rdma_srq_size": 0, 00:20:14.536 "reconnect_delay_sec": 0, 00:20:14.536 "timeout_admin_us": 0, 00:20:14.536 "timeout_us": 0, 00:20:14.536 "transport_ack_timeout": 0, 00:20:14.536 "transport_retry_count": 4, 00:20:14.536 "transport_tos": 0 00:20:14.536 } 00:20:14.536 }, 00:20:14.536 { 00:20:14.536 "method": "bdev_nvme_attach_controller", 00:20:14.536 "params": { 00:20:14.536 "adrfam": "IPv4", 00:20:14.536 "ctrlr_loss_timeout_sec": 0, 00:20:14.536 "ddgst": false, 00:20:14.536 "fast_io_fail_timeout_sec": 0, 00:20:14.536 "hdgst": false, 00:20:14.536 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:14.536 "name": "nvme0", 00:20:14.536 "prchk_guard": false, 00:20:14.536 "prchk_reftag": false, 00:20:14.536 "psk": "key0", 00:20:14.536 "reconnect_delay_sec": 0, 00:20:14.536 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:14.536 "traddr": "10.0.0.2", 00:20:14.536 "trsvcid": "4420", 00:20:14.536 "trtype": "TCP" 00:20:14.536 } 00:20:14.536 }, 00:20:14.536 { 00:20:14.536 "method": "bdev_nvme_set_hotplug", 00:20:14.536 "params": { 00:20:14.536 "enable": false, 00:20:14.536 "period_us": 100000 00:20:14.536 } 00:20:14.536 }, 00:20:14.536 { 00:20:14.536 "method": "bdev_enable_histogram", 00:20:14.536 "params": { 00:20:14.536 "enable": true, 00:20:14.536 "name": "nvme0n1" 00:20:14.536 } 00:20:14.536 }, 00:20:14.536 { 00:20:14.536 "method": "bdev_wait_for_examine" 00:20:14.536 } 00:20:14.536 ] 00:20:14.536 }, 00:20:14.536 { 00:20:14.536 "subsystem": "nbd", 00:20:14.536 "config": [] 00:20:14.536 } 00:20:14.536 ] 00:20:14.536 }' 00:20:14.536 17:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # killprocess 84503 00:20:14.536 17:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84503 ']' 00:20:14.536 17:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84503 00:20:14.536 17:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:14.536 17:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:14.536 17:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84503 00:20:14.536 17:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:14.536 killing process with pid 84503 00:20:14.536 Received shutdown signal, test time was about 1.000000 seconds 00:20:14.536 00:20:14.536 Latency(us) 00:20:14.536 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:14.536 =================================================================================================================== 00:20:14.536 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:14.536 17:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:14.536 17:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 
'killing process with pid 84503' 00:20:14.536 17:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84503 00:20:14.536 17:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84503 00:20:14.795 17:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@269 -- # killprocess 84453 00:20:14.795 17:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84453 ']' 00:20:14.795 17:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84453 00:20:14.795 17:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:14.795 17:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:14.795 17:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84453 00:20:14.795 17:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:14.795 killing process with pid 84453 00:20:14.795 17:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:14.795 17:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84453' 00:20:14.795 17:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84453 00:20:14.795 17:21:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84453 00:20:15.054 17:21:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:20:15.054 17:21:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:15.054 17:21:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:15.054 17:21:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:20:15.054 "subsystems": [ 00:20:15.054 { 00:20:15.054 "subsystem": "keyring", 00:20:15.054 "config": [ 00:20:15.054 { 00:20:15.054 "method": "keyring_file_add_key", 00:20:15.054 "params": { 00:20:15.054 "name": "key0", 00:20:15.054 "path": "/tmp/tmp.AvJ35qsIO2" 00:20:15.054 } 00:20:15.054 } 00:20:15.054 ] 00:20:15.054 }, 00:20:15.054 { 00:20:15.054 "subsystem": "iobuf", 00:20:15.054 "config": [ 00:20:15.054 { 00:20:15.054 "method": "iobuf_set_options", 00:20:15.054 "params": { 00:20:15.054 "large_bufsize": 135168, 00:20:15.054 "large_pool_count": 1024, 00:20:15.054 "small_bufsize": 8192, 00:20:15.054 "small_pool_count": 8192 00:20:15.054 } 00:20:15.054 } 00:20:15.054 ] 00:20:15.054 }, 00:20:15.054 { 00:20:15.055 "subsystem": "sock", 00:20:15.055 "config": [ 00:20:15.055 { 00:20:15.055 "method": "sock_set_default_impl", 00:20:15.055 "params": { 00:20:15.055 "impl_name": "posix" 00:20:15.055 } 00:20:15.055 }, 00:20:15.055 { 00:20:15.055 "method": "sock_impl_set_options", 00:20:15.055 "params": { 00:20:15.055 "enable_ktls": false, 00:20:15.055 "enable_placement_id": 0, 00:20:15.055 "enable_quickack": false, 00:20:15.055 "enable_recv_pipe": true, 00:20:15.055 "enable_zerocopy_send_client": false, 00:20:15.055 "enable_zerocopy_send_server": true, 00:20:15.055 "impl_name": "ssl", 00:20:15.055 "recv_buf_size": 4096, 00:20:15.055 "send_buf_size": 4096, 00:20:15.055 "tls_version": 0, 00:20:15.055 "zerocopy_threshold": 0 00:20:15.055 } 00:20:15.055 }, 00:20:15.055 { 00:20:15.055 "method": "sock_impl_set_options", 00:20:15.055 "params": { 00:20:15.055 
"enable_ktls": false, 00:20:15.055 "enable_placement_id": 0, 00:20:15.055 "enable_quickack": false, 00:20:15.055 "enable_recv_pipe": true, 00:20:15.055 "enable_zerocopy_send_client": false, 00:20:15.055 "enable_zerocopy_send_server": true, 00:20:15.055 "impl_name": "posix", 00:20:15.055 "recv_buf_size": 2097152, 00:20:15.055 "send_buf_size": 2097152, 00:20:15.055 "tls_version": 0, 00:20:15.055 "zerocopy_threshold": 0 00:20:15.055 } 00:20:15.055 } 00:20:15.055 ] 00:20:15.055 }, 00:20:15.055 { 00:20:15.055 "subsystem": "vmd", 00:20:15.055 "config": [] 00:20:15.055 }, 00:20:15.055 { 00:20:15.055 "subsystem": "accel", 00:20:15.055 "config": [ 00:20:15.055 { 00:20:15.055 "method": "accel_set_options", 00:20:15.055 "params": { 00:20:15.055 "buf_count": 2048, 00:20:15.055 "large_cache_size": 16, 00:20:15.055 "sequence_count": 2048, 00:20:15.055 "small_cache_size": 128, 00:20:15.055 "task_count": 2048 00:20:15.055 } 00:20:15.055 } 00:20:15.055 ] 00:20:15.055 }, 00:20:15.055 { 00:20:15.055 "subsystem": "bdev", 00:20:15.055 "config": [ 00:20:15.055 { 00:20:15.055 "method": "bdev_set_options", 00:20:15.055 "params": { 00:20:15.055 "bdev_auto_examine": true, 00:20:15.055 "bdev_io_cache_size": 256, 00:20:15.055 "bdev_io_pool_size": 65535, 00:20:15.055 "iobuf_large_cache_size": 16, 00:20:15.055 "iobuf_small_cache_size": 128 00:20:15.055 } 00:20:15.055 }, 00:20:15.055 { 00:20:15.055 "method": "bdev_raid_set_options", 00:20:15.055 "params": { 00:20:15.055 "process_max_bandwidth_mb_sec": 0, 00:20:15.055 "process_window_size_kb": 1024 00:20:15.055 } 00:20:15.055 }, 00:20:15.055 { 00:20:15.055 "method": "bdev_iscsi_set_options", 00:20:15.055 "params": { 00:20:15.055 "timeout_sec": 30 00:20:15.055 } 00:20:15.055 }, 00:20:15.055 { 00:20:15.055 "method": "bdev_nvme_set_options", 00:20:15.055 "params": { 00:20:15.055 "action_on_timeout": "none", 00:20:15.055 "allow_accel_sequence": false, 00:20:15.055 "arbitration_burst": 0, 00:20:15.055 "bdev_retry_count": 3, 00:20:15.055 "ctrlr_loss_timeout_sec": 0, 00:20:15.055 "delay_cmd_submit": true, 00:20:15.055 "dhchap_dhgroups": [ 00:20:15.055 "null", 00:20:15.055 "ffdhe2048", 00:20:15.055 "ffdhe3072", 00:20:15.055 "ffdhe4096", 00:20:15.055 "ffdhe6144", 00:20:15.055 "ffdhe8192" 00:20:15.055 ], 00:20:15.055 "dhchap_digests": [ 00:20:15.055 "sha256", 00:20:15.055 "sha384", 00:20:15.055 "sha512" 00:20:15.055 ], 00:20:15.055 "disable_auto_failback": false, 00:20:15.055 "fast_io_fail_timeout_sec": 0, 00:20:15.055 "generate_uuids": false, 00:20:15.055 "high_priority_weight": 0, 00:20:15.055 "io_path_stat": false, 00:20:15.055 "io_queue_requests": 0, 00:20:15.055 "keep_alive_timeout_ms": 10000, 00:20:15.055 "low_priority_weight": 0, 00:20:15.055 "medium_priority_weight": 0, 00:20:15.055 "nvme_adminq_poll_period_us": 10000, 00:20:15.055 "nvme_error_stat": false, 00:20:15.055 "nvme_ioq_poll_period_us": 0, 00:20:15.055 "rdma_cm_event_timeout_ms": 0, 00:20:15.055 "rdma_max_cq_size": 0, 00:20:15.055 "rdma_srq_size": 0, 00:20:15.055 "reconnect_delay_sec": 0, 00:20:15.055 "timeout_admin_us": 0, 00:20:15.055 "timeout_us": 0, 00:20:15.055 "transport_ack_timeout": 0, 00:20:15.055 "transport_retry_count": 4, 00:20:15.055 "transport_tos": 0 00:20:15.055 } 00:20:15.055 }, 00:20:15.055 { 00:20:15.055 "method": "bdev_nvme_set_hotplug", 00:20:15.055 "params": { 00:20:15.055 "enable": false, 00:20:15.055 "period_us": 100000 00:20:15.055 } 00:20:15.055 }, 00:20:15.055 { 00:20:15.055 "method": "bdev_malloc_create", 00:20:15.055 "params": { 00:20:15.055 "block_size": 4096, 00:20:15.055 
"dif_is_head_of_md": false, 00:20:15.055 "dif_pi_format": 0, 00:20:15.055 "dif_type": 0, 00:20:15.055 "md_size": 0, 00:20:15.055 "name": "malloc0", 00:20:15.055 "num_blocks": 8192, 00:20:15.055 "optimal_io_boundary": 0, 00:20:15.055 "physical_block_size": 4096, 00:20:15.055 "uuid": "b95342b1-e59a-4ba8-9987-c33084c3a910" 00:20:15.055 } 00:20:15.055 }, 00:20:15.055 { 00:20:15.055 "method": "bdev_wait_for_examine" 00:20:15.055 } 00:20:15.055 ] 00:20:15.055 }, 00:20:15.055 { 00:20:15.055 "subsystem": "nbd", 00:20:15.055 "config": [] 00:20:15.055 }, 00:20:15.055 { 00:20:15.055 "subsystem": "scheduler", 00:20:15.055 "config": [ 00:20:15.055 { 00:20:15.055 "method": "framework_set_scheduler", 00:20:15.055 "params": { 00:20:15.055 "name": "static" 00:20:15.055 } 00:20:15.055 } 00:20:15.055 ] 00:20:15.055 }, 00:20:15.055 { 00:20:15.055 "subsystem": "nvmf", 00:20:15.055 "config": [ 00:20:15.055 { 00:20:15.055 "method": "nvmf_set_config", 00:20:15.055 "params": { 00:20:15.055 "admin_cmd_passthru": { 00:20:15.055 "identify_ctrlr": false 00:20:15.055 }, 00:20:15.055 "discovery_filter": "match_any" 00:20:15.055 } 00:20:15.055 }, 00:20:15.055 { 00:20:15.055 "method": "nvmf_set_max_subsystems", 00:20:15.055 "params": { 00:20:15.055 "max_subsystems": 1024 00:20:15.055 } 00:20:15.055 }, 00:20:15.055 { 00:20:15.055 "method": "nvmf_set_crdt", 00:20:15.055 "params": { 00:20:15.055 "crdt1": 0, 00:20:15.055 "crdt2": 0, 00:20:15.055 "crdt3": 0 00:20:15.055 } 00:20:15.055 }, 00:20:15.055 { 00:20:15.055 "method": "nvmf_create_transport", 00:20:15.055 "params": { 00:20:15.055 "abort_timeout_sec": 1, 00:20:15.055 "ack_timeout": 0, 00:20:15.055 "buf_cache_size": 4294967295, 00:20:15.055 "c2h_success": false, 00:20:15.055 "data_wr_pool_size": 0, 00:20:15.055 "dif_insert_or_strip": false, 00:20:15.055 "in_capsule_data_size": 4096, 00:20:15.055 "io_unit_size": 131072, 00:20:15.055 "max_aq_depth": 128, 00:20:15.055 "max_io_qpairs_per_ctrlr": 127, 00:20:15.055 "max_io_size": 131072, 00:20:15.055 "max_queue_depth": 128, 00:20:15.055 "num_shared_buffers": 511, 00:20:15.055 "sock_priority": 0, 00:20:15.055 "trtype": "TCP", 00:20:15.055 "zcopy": false 00:20:15.055 } 00:20:15.055 }, 00:20:15.055 { 00:20:15.055 "method": "nvmf_create_subsystem", 00:20:15.055 "params": { 00:20:15.055 "allow_any_host": false, 00:20:15.055 "ana_reporting": false, 00:20:15.055 "max_cntlid": 65519, 00:20:15.055 "max_namespaces": 32, 00:20:15.055 "min_cntlid": 1, 00:20:15.055 "model_number": "SPDK bdev Controller", 00:20:15.055 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:15.055 "serial_number": "00000000000000000000" 00:20:15.055 } 00:20:15.055 }, 00:20:15.055 { 00:20:15.055 "method": "nvmf_subsystem_add_host", 00:20:15.055 "params": { 00:20:15.055 "host": "nqn.2016-06.io.spdk:host1", 00:20:15.055 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:15.055 "psk": "key0" 00:20:15.055 } 00:20:15.055 }, 00:20:15.055 { 00:20:15.055 "method": "nvmf_subsystem_add_ns", 00:20:15.055 "params": { 00:20:15.055 "namespace": { 00:20:15.055 "bdev_name": "malloc0", 00:20:15.055 "nguid": "B95342B1E59A4BA89987C33084C3A910", 00:20:15.055 "no_auto_visible": false, 00:20:15.055 "nsid": 1, 00:20:15.055 "uuid": "b95342b1-e59a-4ba8-9987-c33084c3a910" 00:20:15.055 }, 00:20:15.055 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:20:15.055 } 00:20:15.055 }, 00:20:15.055 { 00:20:15.055 "method": "nvmf_subsystem_add_listener", 00:20:15.055 "params": { 00:20:15.055 "listen_address": { 00:20:15.055 "adrfam": "IPv4", 00:20:15.055 "traddr": "10.0.0.2", 00:20:15.055 "trsvcid": "4420", 00:20:15.055 
"trtype": "TCP" 00:20:15.055 }, 00:20:15.055 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:15.055 "secure_channel": false, 00:20:15.055 "sock_impl": "ssl" 00:20:15.055 } 00:20:15.055 } 00:20:15.055 ] 00:20:15.055 } 00:20:15.055 ] 00:20:15.055 }' 00:20:15.055 17:21:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:15.056 17:21:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=84579 00:20:15.056 17:21:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:20:15.056 17:21:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 84579 00:20:15.056 17:21:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84579 ']' 00:20:15.056 17:21:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:15.056 17:21:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:15.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:15.056 17:21:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:15.056 17:21:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:15.056 17:21:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:15.314 [2024-07-24 17:21:01.304265] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:20:15.314 [2024-07-24 17:21:01.304369] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:15.314 [2024-07-24 17:21:01.445001] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:15.573 [2024-07-24 17:21:01.556674] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:15.573 [2024-07-24 17:21:01.556727] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:15.573 [2024-07-24 17:21:01.556754] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:15.573 [2024-07-24 17:21:01.556762] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:15.573 [2024-07-24 17:21:01.556769] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:15.573 [2024-07-24 17:21:01.556846] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:15.573 [2024-07-24 17:21:01.796213] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:15.831 [2024-07-24 17:21:01.828128] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:15.831 [2024-07-24 17:21:01.828382] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:16.088 17:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:16.088 17:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:16.088 17:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:16.088 17:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:16.088 17:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:16.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:16.347 17:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:16.347 17:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=84620 00:20:16.347 17:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 84620 /var/tmp/bdevperf.sock 00:20:16.347 17:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84620 ']' 00:20:16.347 17:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:16.347 17:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:16.347 17:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
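With the target now reporting a TLS-capable NVMe/TCP listener on 10.0.0.2 port 4420 inside the nvmf_tgt_ns_spdk namespace, a hypothetical read-only check (not something the test performs) could confirm the subsystem and its listen address from the target's default RPC socket:

  # List subsystems, namespaces and listen addresses of the running target
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems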
00:20:16.347 17:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:16.347 17:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:16.347 17:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:20:16.347 17:21:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:20:16.347 "subsystems": [ 00:20:16.347 { 00:20:16.347 "subsystem": "keyring", 00:20:16.347 "config": [ 00:20:16.347 { 00:20:16.347 "method": "keyring_file_add_key", 00:20:16.347 "params": { 00:20:16.347 "name": "key0", 00:20:16.347 "path": "/tmp/tmp.AvJ35qsIO2" 00:20:16.347 } 00:20:16.347 } 00:20:16.347 ] 00:20:16.347 }, 00:20:16.347 { 00:20:16.347 "subsystem": "iobuf", 00:20:16.347 "config": [ 00:20:16.347 { 00:20:16.347 "method": "iobuf_set_options", 00:20:16.347 "params": { 00:20:16.347 "large_bufsize": 135168, 00:20:16.347 "large_pool_count": 1024, 00:20:16.347 "small_bufsize": 8192, 00:20:16.348 "small_pool_count": 8192 00:20:16.348 } 00:20:16.348 } 00:20:16.348 ] 00:20:16.348 }, 00:20:16.348 { 00:20:16.348 "subsystem": "sock", 00:20:16.348 "config": [ 00:20:16.348 { 00:20:16.348 "method": "sock_set_default_impl", 00:20:16.348 "params": { 00:20:16.348 "impl_name": "posix" 00:20:16.348 } 00:20:16.348 }, 00:20:16.348 { 00:20:16.348 "method": "sock_impl_set_options", 00:20:16.348 "params": { 00:20:16.348 "enable_ktls": false, 00:20:16.348 "enable_placement_id": 0, 00:20:16.348 "enable_quickack": false, 00:20:16.348 "enable_recv_pipe": true, 00:20:16.348 "enable_zerocopy_send_client": false, 00:20:16.348 "enable_zerocopy_send_server": true, 00:20:16.348 "impl_name": "ssl", 00:20:16.348 "recv_buf_size": 4096, 00:20:16.348 "send_buf_size": 4096, 00:20:16.348 "tls_version": 0, 00:20:16.348 "zerocopy_threshold": 0 00:20:16.348 } 00:20:16.348 }, 00:20:16.348 { 00:20:16.348 "method": "sock_impl_set_options", 00:20:16.348 "params": { 00:20:16.348 "enable_ktls": false, 00:20:16.348 "enable_placement_id": 0, 00:20:16.348 "enable_quickack": false, 00:20:16.348 "enable_recv_pipe": true, 00:20:16.348 "enable_zerocopy_send_client": false, 00:20:16.348 "enable_zerocopy_send_server": true, 00:20:16.348 "impl_name": "posix", 00:20:16.348 "recv_buf_size": 2097152, 00:20:16.348 "send_buf_size": 2097152, 00:20:16.348 "tls_version": 0, 00:20:16.348 "zerocopy_threshold": 0 00:20:16.348 } 00:20:16.348 } 00:20:16.348 ] 00:20:16.348 }, 00:20:16.348 { 00:20:16.348 "subsystem": "vmd", 00:20:16.348 "config": [] 00:20:16.348 }, 00:20:16.348 { 00:20:16.348 "subsystem": "accel", 00:20:16.348 "config": [ 00:20:16.348 { 00:20:16.348 "method": "accel_set_options", 00:20:16.348 "params": { 00:20:16.348 "buf_count": 2048, 00:20:16.348 "large_cache_size": 16, 00:20:16.348 "sequence_count": 2048, 00:20:16.348 "small_cache_size": 128, 00:20:16.348 "task_count": 2048 00:20:16.348 } 00:20:16.348 } 00:20:16.348 ] 00:20:16.348 }, 00:20:16.348 { 00:20:16.348 "subsystem": "bdev", 00:20:16.348 "config": [ 00:20:16.348 { 00:20:16.348 "method": "bdev_set_options", 00:20:16.348 "params": { 00:20:16.348 "bdev_auto_examine": true, 00:20:16.348 "bdev_io_cache_size": 256, 00:20:16.348 "bdev_io_pool_size": 65535, 00:20:16.348 "iobuf_large_cache_size": 16, 00:20:16.348 "iobuf_small_cache_size": 128 00:20:16.348 } 00:20:16.348 }, 00:20:16.348 { 00:20:16.348 "method": "bdev_raid_set_options", 00:20:16.348 "params": { 00:20:16.348 
"process_max_bandwidth_mb_sec": 0, 00:20:16.348 "process_window_size_kb": 1024 00:20:16.348 } 00:20:16.348 }, 00:20:16.348 { 00:20:16.348 "method": "bdev_iscsi_set_options", 00:20:16.348 "params": { 00:20:16.348 "timeout_sec": 30 00:20:16.348 } 00:20:16.348 }, 00:20:16.348 { 00:20:16.348 "method": "bdev_nvme_set_options", 00:20:16.348 "params": { 00:20:16.348 "action_on_timeout": "none", 00:20:16.348 "allow_accel_sequence": false, 00:20:16.348 "arbitration_burst": 0, 00:20:16.348 "bdev_retry_count": 3, 00:20:16.348 "ctrlr_loss_timeout_sec": 0, 00:20:16.348 "delay_cmd_submit": true, 00:20:16.348 "dhchap_dhgroups": [ 00:20:16.348 "null", 00:20:16.348 "ffdhe2048", 00:20:16.348 "ffdhe3072", 00:20:16.348 "ffdhe4096", 00:20:16.348 "ffdhe6144", 00:20:16.348 "ffdhe8192" 00:20:16.348 ], 00:20:16.348 "dhchap_digests": [ 00:20:16.348 "sha256", 00:20:16.348 "sha384", 00:20:16.348 "sha512" 00:20:16.348 ], 00:20:16.348 "disable_auto_failback": false, 00:20:16.348 "fast_io_fail_timeout_sec": 0, 00:20:16.348 "generate_uuids": false, 00:20:16.348 "high_priority_weight": 0, 00:20:16.348 "io_path_stat": false, 00:20:16.348 "io_queue_requests": 512, 00:20:16.348 "keep_alive_timeout_ms": 10000, 00:20:16.348 "low_priority_weight": 0, 00:20:16.348 "medium_priority_weight": 0, 00:20:16.348 "nvme_adminq_poll_period_us": 10000, 00:20:16.348 "nvme_error_stat": false, 00:20:16.348 "nvme_ioq_poll_period_us": 0, 00:20:16.348 "rdma_cm_event_timeout_ms": 0, 00:20:16.348 "rdma_max_cq_size": 0, 00:20:16.348 "rdma_srq_size": 0, 00:20:16.348 "reconnect_delay_sec": 0, 00:20:16.348 "timeout_admin_us": 0, 00:20:16.348 "timeout_us": 0, 00:20:16.348 "transport_ack_timeout": 0, 00:20:16.348 "transport_retry_count": 4, 00:20:16.348 "transport_tos": 0 00:20:16.348 } 00:20:16.348 }, 00:20:16.348 { 00:20:16.348 "method": "bdev_nvme_attach_controller", 00:20:16.348 "params": { 00:20:16.348 "adrfam": "IPv4", 00:20:16.348 "ctrlr_loss_timeout_sec": 0, 00:20:16.348 "ddgst": false, 00:20:16.348 "fast_io_fail_timeout_sec": 0, 00:20:16.348 "hdgst": false, 00:20:16.348 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:16.348 "name": "nvme0", 00:20:16.348 "prchk_guard": false, 00:20:16.348 "prchk_reftag": false, 00:20:16.348 "psk": "key0", 00:20:16.348 "reconnect_delay_sec": 0, 00:20:16.348 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:16.348 "traddr": "10.0.0.2", 00:20:16.348 "trsvcid": "4420", 00:20:16.348 "trtype": "TCP" 00:20:16.348 } 00:20:16.348 }, 00:20:16.348 { 00:20:16.348 "method": "bdev_nvme_set_hotplug", 00:20:16.348 "params": { 00:20:16.348 "enable": false, 00:20:16.348 "period_us": 100000 00:20:16.348 } 00:20:16.348 }, 00:20:16.348 { 00:20:16.348 "method": "bdev_enable_histogram", 00:20:16.348 "params": { 00:20:16.348 "enable": true, 00:20:16.348 "name": "nvme0n1" 00:20:16.348 } 00:20:16.348 }, 00:20:16.348 { 00:20:16.348 "method": "bdev_wait_for_examine" 00:20:16.348 } 00:20:16.348 ] 00:20:16.348 }, 00:20:16.348 { 00:20:16.348 "subsystem": "nbd", 00:20:16.348 "config": [] 00:20:16.348 } 00:20:16.348 ] 00:20:16.348 }' 00:20:16.348 [2024-07-24 17:21:02.375019] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
00:20:16.348 [2024-07-24 17:21:02.375119] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84620 ] 00:20:16.348 [2024-07-24 17:21:02.511220] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:16.607 [2024-07-24 17:21:02.626696] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:16.607 [2024-07-24 17:21:02.803886] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:17.171 17:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:17.171 17:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:17.171 17:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:17.171 17:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:20:17.429 17:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.429 17:21:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@278 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:17.429 Running I/O for 1 seconds... 00:20:18.800 00:20:18.800 Latency(us) 00:20:18.800 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:18.800 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:18.800 Verification LBA range: start 0x0 length 0x2000 00:20:18.800 nvme0n1 : 1.02 3782.75 14.78 0.00 0.00 33384.35 2085.24 24307.90 00:20:18.800 =================================================================================================================== 00:20:18.800 Total : 3782.75 14.78 0.00 0.00 33384.35 2085.24 24307.90 00:20:18.800 0 00:20:18.800 17:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:20:18.800 17:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:20:18.800 17:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:20:18.800 17:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:20:18.800 17:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:20:18.800 17:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:20:18.800 17:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:18.800 17:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:20:18.800 17:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:20:18.800 17:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:20:18.800 17:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:18.800 nvmf_trace.0 00:20:18.800 17:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:20:18.800 17:21:04 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 84620 00:20:18.800 17:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84620 ']' 00:20:18.800 17:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84620 00:20:18.800 17:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:18.800 17:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:18.800 17:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84620 00:20:18.800 17:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:18.800 killing process with pid 84620 00:20:18.800 Received shutdown signal, test time was about 1.000000 seconds 00:20:18.800 00:20:18.800 Latency(us) 00:20:18.800 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:18.800 =================================================================================================================== 00:20:18.800 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:18.800 17:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:18.800 17:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84620' 00:20:18.800 17:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84620 00:20:18.800 17:21:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84620 00:20:19.071 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:20:19.071 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:19.071 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:20:19.071 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:19.071 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:20:19.071 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:19.071 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:19.071 rmmod nvme_tcp 00:20:19.071 rmmod nvme_fabrics 00:20:19.071 rmmod nvme_keyring 00:20:19.071 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:19.071 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:20:19.071 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:20:19.071 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 84579 ']' 00:20:19.071 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 84579 00:20:19.071 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84579 ']' 00:20:19.071 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84579 00:20:19.071 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:19.071 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:19.071 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84579 00:20:19.071 17:21:05 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:19.071 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:19.071 killing process with pid 84579 00:20:19.071 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84579' 00:20:19.071 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84579 00:20:19.071 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84579 00:20:19.349 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:19.349 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:19.349 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:19.349 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:19.349 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:19.349 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:19.349 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:19.349 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:19.349 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:19.349 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.PuYbP2rDVP /tmp/tmp.ZSt3ofVAtr /tmp/tmp.AvJ35qsIO2 00:20:19.349 00:20:19.349 real 1m27.177s 00:20:19.349 user 2m17.354s 00:20:19.349 sys 0m29.017s 00:20:19.349 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:19.349 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:19.349 ************************************ 00:20:19.349 END TEST nvmf_tls 00:20:19.349 ************************************ 00:20:19.349 17:21:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:19.349 17:21:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:19.349 17:21:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:19.349 17:21:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:19.349 ************************************ 00:20:19.349 START TEST nvmf_fips 00:20:19.349 ************************************ 00:20:19.349 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:19.607 * Looking for test storage... 
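At this point the nvmf_tls stage is finished (about 1m27s of wall time) and run_test has moved on to the FIPS stage, fips.sh. To reproduce just that stage outside the CI wrapper, assuming the same repository layout as in this log and a root shell (the script sets up veth interfaces and a network namespace, so it needs privileges), one could invoke it directly:

  cd /home/vagrant/spdk_repo/spdk
  ./test/nvmf/fips/fips.sh --transport=tcp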
00:20:19.607 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:20:19.607 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:19.607 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:20:19.607 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:19.607 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # openssl version 
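check_openssl_version above parses the output of "openssl version" (3.0.9 on this builder) and requires at least the 3.0.0 target before any FIPS work is attempted; the cmp_versions trace that follows shows the field-by-field comparison the script itself performs. A purely illustrative equivalent of that gate, not the code the test runs:

  # Same ">= 3.0.0" check expressed with sort -V
  ver=$(openssl version | awk '{print $2}')      # e.g. 3.0.9
  if [ "$(printf '%s\n' 3.0.0 "$ver" | sort -V | head -n1)" = 3.0.0 ]; then
      echo "OpenSSL $ver meets the 3.0.0 minimum for the FIPS test"
  else
      echo "OpenSSL $ver is too old for the FIPS test" >&2
  fi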
00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@37 -- # cat 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # : 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:19.608 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:20:19.609 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:19.609 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:20:19.609 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:19.609 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:20:19.609 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:20:19.609 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:20:19.609 Error setting digest 00:20:19.609 00F2021AEF7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:20:19.609 00F2021AEF7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:20:19.609 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:20:19.609 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:19.609 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:19.609 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:19.609 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:20:19.609 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:19.609 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:19.609 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:19.609 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:19.609 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:19.609 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:19.609 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:19.609 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:19.609 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:19.609 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:19.609 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:19.609 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:19.609 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:19.609 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:19.609 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:19.609 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:19.609 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:19.609 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:19.609 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:19.609 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:19.609 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:19.609 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:20:19.609 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:19.609 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:19.609 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:19.609 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:19.609 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:19.867 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:19.867 Cannot find device "nvmf_tgt_br" 00:20:19.867 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # true 00:20:19.867 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:19.867 Cannot find device "nvmf_tgt_br2" 00:20:19.867 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # true 00:20:19.867 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:19.867 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:19.867 Cannot find device "nvmf_tgt_br" 00:20:19.867 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # true 00:20:19.867 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:19.867 Cannot find device "nvmf_tgt_br2" 00:20:19.867 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # true 00:20:19.867 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:19.867 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:19.867 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:19.867 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:19.867 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:20:19.867 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:19.867 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:19.867 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:20:19.867 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:19.867 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:19.867 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:19.867 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:19.867 17:21:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:19.867 17:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:19.867 17:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@178 -- # ip 
addr add 10.0.0.1/24 dev nvmf_init_if 00:20:19.867 17:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:19.867 17:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:19.867 17:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:19.867 17:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:19.867 17:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:19.867 17:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:19.867 17:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:19.867 17:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:19.867 17:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:19.867 17:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:19.867 17:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:19.867 17:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:20.125 17:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:20.125 17:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:20.125 17:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:20.125 17:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:20.125 17:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:20.125 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:20.125 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:20:20.125 00:20:20.125 --- 10.0.0.2 ping statistics --- 00:20:20.125 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:20.125 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:20:20.125 17:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:20.125 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:20.125 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:20:20.125 00:20:20.125 --- 10.0.0.3 ping statistics --- 00:20:20.125 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:20.125 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:20:20.125 17:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:20.125 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:20.125 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:20:20.125 00:20:20.125 --- 10.0.0.1 ping statistics --- 00:20:20.125 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:20.125 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:20:20.125 17:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:20.125 17:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@433 -- # return 0 00:20:20.125 17:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:20.125 17:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:20.125 17:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:20.126 17:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:20.126 17:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:20.126 17:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:20.126 17:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:20.126 17:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:20:20.126 17:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:20.126 17:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:20.126 17:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:20.126 17:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=84904 00:20:20.126 17:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:20.126 17:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 84904 00:20:20.126 17:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 84904 ']' 00:20:20.126 17:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:20.126 17:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:20.126 17:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:20.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:20.126 17:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:20.126 17:21:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:20.126 [2024-07-24 17:21:06.277926] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
00:20:20.126 [2024-07-24 17:21:06.278048] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:20.384 [2024-07-24 17:21:06.418156] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:20.384 [2024-07-24 17:21:06.539942] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:20.384 [2024-07-24 17:21:06.540039] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:20.384 [2024-07-24 17:21:06.540068] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:20.384 [2024-07-24 17:21:06.540079] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:20.384 [2024-07-24 17:21:06.540123] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:20.384 [2024-07-24 17:21:06.540159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:21.318 17:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:21.318 17:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:20:21.318 17:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:21.318 17:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:21.318 17:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:21.318 17:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:21.318 17:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:20:21.318 17:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:21.318 17:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:20:21.318 17:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:21.318 17:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:20:21.318 17:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:20:21.318 17:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:20:21.318 17:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:21.318 [2024-07-24 17:21:07.491688] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:21.318 [2024-07-24 17:21:07.507659] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:21.318 [2024-07-24 17:21:07.507850] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:21.318 [2024-07-24 17:21:07.539948] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 
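The target-side TLS setup traced above reduces to a short sequence: write the NVMe TLS PSK interchange key to a file, restrict it to mode 0600, and register a TLS listener plus an allowed host that points at that key. A minimal sketch in shell, assuming the standard rpc.py subcommands; the key value is the one shown in the trace, while the key path, NQNs, and exact flags are illustrative rather than the literal arguments fips.sh passes:

  # Hedged sketch of the PSK/TLS target configuration, not the fips.sh source.
  key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'   # TLS PSK interchange format
  key_path=/tmp/fips_psk_key.txt                                            # illustrative location
  echo -n "$key" > "$key_path"
  chmod 0600 "$key_path"                         # the target rejects PSK files with loose permissions
  scripts/rpc.py nvmf_create_transport -t tcp -o
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  # --secure-channel enables TLS on the listener; --psk here is the deprecated "PSK path"
  # form that the nvmf_tcp_psk_path warning above refers to.
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 --secure-channel
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key_path"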
00:20:21.318 malloc0 00:20:21.577 17:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:21.577 17:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=84957 00:20:21.577 17:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:21.577 17:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 84957 /var/tmp/bdevperf.sock 00:20:21.577 17:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 84957 ']' 00:20:21.577 17:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:21.577 17:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:21.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:21.577 17:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:21.577 17:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:21.577 17:21:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:21.577 [2024-07-24 17:21:07.651298] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:20:21.577 [2024-07-24 17:21:07.651421] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84957 ] 00:20:21.577 [2024-07-24 17:21:07.792286] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:21.836 [2024-07-24 17:21:07.896652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:22.402 17:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:22.402 17:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:20:22.402 17:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:20:22.660 [2024-07-24 17:21:08.812648] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:22.660 [2024-07-24 17:21:08.812767] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:22.660 TLSTESTn1 00:20:22.918 17:21:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:22.918 Running I/O for 10 seconds... 
00:20:32.891 00:20:32.892 Latency(us) 00:20:32.892 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:32.892 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:32.892 Verification LBA range: start 0x0 length 0x2000 00:20:32.892 TLSTESTn1 : 10.03 4147.56 16.20 0.00 0.00 30799.60 7417.48 19541.64 00:20:32.892 =================================================================================================================== 00:20:32.892 Total : 4147.56 16.20 0.00 0.00 30799.60 7417.48 19541.64 00:20:32.892 0 00:20:32.892 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:20:32.892 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:20:32.892 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:20:32.892 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:20:32.892 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:20:32.892 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:32.892 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:20:32.892 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:20:32.892 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:20:32.892 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:32.892 nvmf_trace.0 00:20:33.151 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:20:33.151 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 84957 00:20:33.151 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 84957 ']' 00:20:33.151 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 84957 00:20:33.151 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:20:33.151 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:33.151 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84957 00:20:33.151 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:33.151 killing process with pid 84957 00:20:33.151 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:33.151 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84957' 00:20:33.151 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 84957 00:20:33.151 Received shutdown signal, test time was about 10.000000 seconds 00:20:33.151 00:20:33.151 Latency(us) 00:20:33.151 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:33.151 =================================================================================================================== 00:20:33.151 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:33.151 [2024-07-24 17:21:19.165581] 
app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:33.151 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 84957 00:20:33.410 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:20:33.410 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:33.410 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:20:33.410 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:33.410 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:20:33.410 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:33.410 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:33.410 rmmod nvme_tcp 00:20:33.410 rmmod nvme_fabrics 00:20:33.410 rmmod nvme_keyring 00:20:33.410 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:33.410 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:20:33.410 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:20:33.410 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 84904 ']' 00:20:33.410 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 84904 00:20:33.410 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 84904 ']' 00:20:33.410 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 84904 00:20:33.410 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:20:33.410 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:33.410 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84904 00:20:33.410 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:33.410 killing process with pid 84904 00:20:33.410 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:33.410 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84904' 00:20:33.410 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 84904 00:20:33.410 [2024-07-24 17:21:19.520952] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:33.410 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 84904 00:20:33.668 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:33.668 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:33.668 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:33.668 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:33.668 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:33.668 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:33.668 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:33.668 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:33.668 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:33.668 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:20:33.668 00:20:33.668 real 0m14.272s 00:20:33.668 user 0m18.910s 00:20:33.668 sys 0m5.995s 00:20:33.668 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:33.669 17:21:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:33.669 ************************************ 00:20:33.669 END TEST nvmf_fips 00:20:33.669 ************************************ 00:20:33.669 17:21:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@45 -- # '[' 0 -eq 1 ']' 00:20:33.669 17:21:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@51 -- # [[ virt == phy ]] 00:20:33.669 17:21:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@66 -- # trap - SIGINT SIGTERM EXIT 00:20:33.669 00:20:33.669 real 6m29.804s 00:20:33.669 user 15m42.218s 00:20:33.669 sys 1m20.709s 00:20:33.669 17:21:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:33.669 17:21:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:33.669 ************************************ 00:20:33.669 END TEST nvmf_target_extra 00:20:33.669 ************************************ 00:20:33.669 17:21:19 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:20:33.669 17:21:19 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:33.669 17:21:19 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:33.669 17:21:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:33.669 ************************************ 00:20:33.669 START TEST nvmf_host 00:20:33.669 ************************************ 00:20:33.669 17:21:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:20:33.928 * Looking for test storage... 
00:20:33.928 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:20:33.928 17:21:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:33.928 17:21:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:20:33.928 17:21:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:33.929 17:21:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:33.929 17:21:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:33.929 17:21:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:33.929 17:21:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:33.929 17:21:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:33.929 17:21:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:33.929 17:21:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:33.929 17:21:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:33.929 17:21:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:33.929 17:21:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:20:33.929 17:21:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:20:33.929 17:21:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:33.929 17:21:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:33.929 17:21:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:33.929 17:21:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:33.929 17:21:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:33.929 17:21:19 nvmf_tcp.nvmf_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:33.929 17:21:19 nvmf_tcp.nvmf_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:33.929 17:21:19 nvmf_tcp.nvmf_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:33.929 17:21:19 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:33.929 17:21:19 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:33.929 17:21:19 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:33.929 17:21:19 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:20:33.929 17:21:19 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:33.929 17:21:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@47 -- # : 0 00:20:33.929 17:21:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:33.929 17:21:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:33.929 17:21:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:33.929 17:21:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:33.929 17:21:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:33.929 17:21:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:33.929 17:21:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:33.929 17:21:19 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:33.929 17:21:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:20:33.929 17:21:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:20:33.929 17:21:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:20:33.929 17:21:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:33.929 17:21:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:33.929 17:21:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:33.929 17:21:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:33.929 ************************************ 00:20:33.929 START TEST nvmf_multicontroller 00:20:33.929 ************************************ 00:20:33.929 17:21:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:33.929 * Looking for test storage... 
00:20:33.929 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:33.929 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:33.929 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:20:33.929 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:33.929 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:33.929 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:33.929 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:33.929 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:33.929 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:33.929 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:33.929 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:33.929 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:33.929 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:33.929 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:20:33.929 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:20:33.929 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:33.929 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:33.929 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:33.929 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:33.929 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:33.929 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:33.929 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:33.929 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:33.929 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:33.929 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:33.929 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:33.929 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:20:33.929 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:33.929 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:20:33.929 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:33.929 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:33.929 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:33.929 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:33.929 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:33.929 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:33.929 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:33.929 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:33.929 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:33.929 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:33.929 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:20:33.929 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:20:33.929 
17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:33.929 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:20:33.930 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:20:33.930 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:33.930 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:33.930 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:33.930 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:33.930 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:33.930 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:33.930 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:33.930 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:33.930 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:33.930 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:33.930 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:33.930 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:33.930 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:33.930 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:33.930 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:33.930 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:33.930 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:33.930 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:33.930 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:33.930 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:33.930 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:33.930 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:33.930 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:33.930 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:33.930 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:33.930 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:33.930 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:33.930 17:21:20 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:33.930 Cannot find device "nvmf_tgt_br" 00:20:33.930 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@155 -- # true 00:20:33.930 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:33.930 Cannot find device "nvmf_tgt_br2" 00:20:33.930 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@156 -- # true 00:20:33.930 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:33.930 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:33.930 Cannot find device "nvmf_tgt_br" 00:20:33.930 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@158 -- # true 00:20:33.930 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:34.189 Cannot find device "nvmf_tgt_br2" 00:20:34.189 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@159 -- # true 00:20:34.189 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:34.189 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:34.189 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:34.189 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:34.189 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@162 -- # true 00:20:34.189 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:34.189 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:34.189 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@163 -- # true 00:20:34.189 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:34.189 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:34.189 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:34.189 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:34.189 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:34.189 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:34.189 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:34.189 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:34.189 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:34.189 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:34.189 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@184 -- # ip link set 
nvmf_init_br up 00:20:34.189 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:34.189 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:34.189 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:34.189 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:34.189 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:34.189 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:34.189 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:34.189 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:34.189 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:34.189 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:34.189 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:34.189 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:34.189 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:34.189 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:34.189 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:20:34.189 00:20:34.189 --- 10.0.0.2 ping statistics --- 00:20:34.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:34.189 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:20:34.189 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:34.189 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:34.189 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:20:34.189 00:20:34.189 --- 10.0.0.3 ping statistics --- 00:20:34.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:34.189 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:20:34.189 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:34.189 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:34.189 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:20:34.189 00:20:34.189 --- 10.0.0.1 ping statistics --- 00:20:34.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:34.189 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:20:34.189 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:34.189 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@433 -- # return 0 00:20:34.189 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:34.189 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:34.189 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:34.189 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:34.189 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:34.189 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:34.189 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:34.447 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:20:34.447 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:34.447 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:34.447 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:34.447 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=85354 00:20:34.447 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:34.447 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 85354 00:20:34.447 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 85354 ']' 00:20:34.447 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:34.447 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:34.447 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:34.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:34.447 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:34.447 17:21:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:34.447 [2024-07-24 17:21:20.490666] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
00:20:34.447 [2024-07-24 17:21:20.490788] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:34.447 [2024-07-24 17:21:20.626802] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:34.705 [2024-07-24 17:21:20.731487] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:34.706 [2024-07-24 17:21:20.731561] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:34.706 [2024-07-24 17:21:20.731590] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:34.706 [2024-07-24 17:21:20.731597] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:34.706 [2024-07-24 17:21:20.731605] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:34.706 [2024-07-24 17:21:20.731758] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:34.706 [2024-07-24 17:21:20.731889] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:34.706 [2024-07-24 17:21:20.731908] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:35.271 17:21:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:35.271 17:21:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:20:35.271 17:21:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:35.271 17:21:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:35.271 17:21:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:35.271 17:21:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:35.271 17:21:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:35.271 17:21:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.271 17:21:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:35.271 [2024-07-24 17:21:21.497205] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:35.271 17:21:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.271 17:21:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:35.272 17:21:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.272 17:21:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:35.530 Malloc0 00:20:35.530 17:21:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.530 17:21:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:35.530 17:21:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.530 17:21:21 nvmf_tcp.nvmf_host.nvmf_multicontroller 
-- common/autotest_common.sh@10 -- # set +x 00:20:35.530 17:21:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.530 17:21:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:35.530 17:21:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.530 17:21:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:35.530 17:21:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.530 17:21:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:35.530 17:21:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.530 17:21:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:35.530 [2024-07-24 17:21:21.559190] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:35.530 17:21:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.530 17:21:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:35.530 17:21:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.530 17:21:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:35.530 [2024-07-24 17:21:21.567113] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:35.530 17:21:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.530 17:21:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:35.530 17:21:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.530 17:21:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:35.530 Malloc1 00:20:35.530 17:21:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.530 17:21:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:20:35.530 17:21:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.530 17:21:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:35.530 17:21:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.530 17:21:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:20:35.530 17:21:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.530 17:21:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:35.530 17:21:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.530 17:21:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:35.530 17:21:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.530 17:21:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:35.530 17:21:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.530 17:21:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:20:35.530 17:21:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.530 17:21:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:35.530 17:21:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.530 17:21:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=85406 00:20:35.530 17:21:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:20:35.530 17:21:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:35.530 17:21:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 85406 /var/tmp/bdevperf.sock 00:20:35.530 17:21:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 85406 ']' 00:20:35.530 17:21:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:35.530 17:21:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:35.530 17:21:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:35.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
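
For readers skimming the xtrace output above: the target-side state built up to this point reduces to the short RPC sequence below. This is a condensed sketch, written as direct scripts/rpc.py invocations against the default /var/tmp/spdk.sock socket (the test issues the same calls through its rpc_cmd wrapper); cnode2 is configured identically with Malloc1 and serial SPDK00000000000002.

  # NVMe/TCP target: one transport, two subsystems, each listening on 10.0.0.2:4420 and :4421
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  # ...the same four subsystem calls are repeated for nqn.2016-06.io.spdk:cnode2 backed by Malloc1...
  # bdevperf is then started in RPC-listen mode so controllers can be attached to it afterwards:
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f

The NOT rpc_cmd calls that follow exercise the duplicate-name and multipath checks in bdev_nvme_attach_controller over that bdevperf socket; each rejected attempt returns JSON-RPC code -114 with the reason spelled out in the error message.
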
00:20:35.530 17:21:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:35.530 17:21:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:36.466 17:21:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:36.466 17:21:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:20:36.466 17:21:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:36.466 17:21:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.466 17:21:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:36.725 NVMe0n1 00:20:36.725 17:21:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.725 17:21:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:36.725 17:21:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.725 17:21:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:20:36.725 17:21:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:36.725 17:21:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.725 1 00:20:36.725 17:21:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:36.725 17:21:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:20:36.725 17:21:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:36.725 17:21:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:36.725 17:21:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:36.725 17:21:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:36.725 17:21:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:36.725 17:21:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:36.725 17:21:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.725 17:21:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:36.725 2024/07/24 17:21:22 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 
hostnqn:nqn.2021-09-7.io.spdk:00001 hostsvcid:60000 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:20:36.725 request: 00:20:36.725 { 00:20:36.725 "method": "bdev_nvme_attach_controller", 00:20:36.725 "params": { 00:20:36.725 "name": "NVMe0", 00:20:36.725 "trtype": "tcp", 00:20:36.725 "traddr": "10.0.0.2", 00:20:36.725 "adrfam": "ipv4", 00:20:36.725 "trsvcid": "4420", 00:20:36.725 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:36.725 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:20:36.725 "hostaddr": "10.0.0.2", 00:20:36.725 "hostsvcid": "60000", 00:20:36.725 "prchk_reftag": false, 00:20:36.725 "prchk_guard": false, 00:20:36.725 "hdgst": false, 00:20:36.725 "ddgst": false 00:20:36.725 } 00:20:36.725 } 00:20:36.725 Got JSON-RPC error response 00:20:36.725 GoRPCClient: error on JSON-RPC call 00:20:36.725 17:21:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:36.725 17:21:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:20:36.725 17:21:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:36.725 17:21:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:36.725 17:21:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:36.725 17:21:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:36.725 17:21:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:20:36.725 17:21:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:36.725 17:21:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:36.725 17:21:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:36.725 17:21:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:36.725 17:21:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:36.725 17:21:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:36.725 17:21:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.725 17:21:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:36.725 2024/07/24 17:21:22 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 hostsvcid:60000 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 
trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:20:36.725 request: 00:20:36.725 { 00:20:36.725 "method": "bdev_nvme_attach_controller", 00:20:36.725 "params": { 00:20:36.725 "name": "NVMe0", 00:20:36.725 "trtype": "tcp", 00:20:36.725 "traddr": "10.0.0.2", 00:20:36.725 "adrfam": "ipv4", 00:20:36.725 "trsvcid": "4420", 00:20:36.725 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:36.725 "hostaddr": "10.0.0.2", 00:20:36.725 "hostsvcid": "60000", 00:20:36.725 "prchk_reftag": false, 00:20:36.725 "prchk_guard": false, 00:20:36.725 "hdgst": false, 00:20:36.725 "ddgst": false 00:20:36.725 } 00:20:36.725 } 00:20:36.725 Got JSON-RPC error response 00:20:36.725 GoRPCClient: error on JSON-RPC call 00:20:36.725 17:21:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:36.725 17:21:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:20:36.725 17:21:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:36.725 17:21:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:36.725 17:21:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:36.725 17:21:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:36.725 17:21:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:20:36.725 17:21:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:36.725 17:21:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:36.725 17:21:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:36.725 17:21:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:36.725 17:21:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:36.725 17:21:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:36.725 17:21:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.726 17:21:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:36.726 2024/07/24 17:21:22 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 hostsvcid:60000 multipath:disable name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 00:20:36.726 request: 00:20:36.726 { 
00:20:36.726 "method": "bdev_nvme_attach_controller", 00:20:36.726 "params": { 00:20:36.726 "name": "NVMe0", 00:20:36.726 "trtype": "tcp", 00:20:36.726 "traddr": "10.0.0.2", 00:20:36.726 "adrfam": "ipv4", 00:20:36.726 "trsvcid": "4420", 00:20:36.726 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:36.726 "hostaddr": "10.0.0.2", 00:20:36.726 "hostsvcid": "60000", 00:20:36.726 "prchk_reftag": false, 00:20:36.726 "prchk_guard": false, 00:20:36.726 "hdgst": false, 00:20:36.726 "ddgst": false, 00:20:36.726 "multipath": "disable" 00:20:36.726 } 00:20:36.726 } 00:20:36.726 Got JSON-RPC error response 00:20:36.726 GoRPCClient: error on JSON-RPC call 00:20:36.726 17:21:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:36.726 17:21:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:20:36.726 17:21:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:36.726 17:21:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:36.726 17:21:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:36.726 17:21:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:36.726 17:21:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:20:36.726 17:21:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:36.726 17:21:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:36.726 17:21:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:36.726 17:21:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:36.726 17:21:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:36.726 17:21:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:36.726 17:21:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.726 17:21:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:36.726 2024/07/24 17:21:22 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.2 hostsvcid:60000 multipath:failover name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:20:36.726 request: 00:20:36.726 { 00:20:36.726 "method": "bdev_nvme_attach_controller", 00:20:36.726 "params": { 00:20:36.726 "name": "NVMe0", 00:20:36.726 "trtype": "tcp", 00:20:36.726 
"traddr": "10.0.0.2", 00:20:36.726 "adrfam": "ipv4", 00:20:36.726 "trsvcid": "4420", 00:20:36.726 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:36.726 "hostaddr": "10.0.0.2", 00:20:36.726 "hostsvcid": "60000", 00:20:36.726 "prchk_reftag": false, 00:20:36.726 "prchk_guard": false, 00:20:36.726 "hdgst": false, 00:20:36.726 "ddgst": false, 00:20:36.726 "multipath": "failover" 00:20:36.726 } 00:20:36.726 } 00:20:36.726 Got JSON-RPC error response 00:20:36.726 GoRPCClient: error on JSON-RPC call 00:20:36.726 17:21:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:36.726 17:21:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:20:36.726 17:21:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:36.726 17:21:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:36.726 17:21:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:36.726 17:21:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:36.726 17:21:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.726 17:21:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:36.726 00:20:36.726 17:21:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.726 17:21:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:36.726 17:21:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.726 17:21:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:36.726 17:21:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.726 17:21:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:36.726 17:21:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.726 17:21:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:36.985 00:20:36.985 17:21:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.985 17:21:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:36.985 17:21:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.985 17:21:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:20:36.985 17:21:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:36.985 17:21:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.985 17:21:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:20:36.985 17:21:23 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:37.931 0 00:20:37.931 17:21:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:20:37.931 17:21:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.931 17:21:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:37.931 17:21:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.931 17:21:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 85406 00:20:37.931 17:21:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 85406 ']' 00:20:37.931 17:21:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 85406 00:20:38.189 17:21:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:20:38.189 17:21:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:38.189 17:21:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85406 00:20:38.189 killing process with pid 85406 00:20:38.189 17:21:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:38.189 17:21:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:38.189 17:21:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85406' 00:20:38.189 17:21:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 85406 00:20:38.189 17:21:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 85406 00:20:38.189 17:21:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:38.189 17:21:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.189 17:21:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:38.189 17:21:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.189 17:21:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:38.189 17:21:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.189 17:21:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:38.189 17:21:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.189 17:21:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:20:38.189 17:21:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:38.189 17:21:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:20:38.449 17:21:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f 00:20:38.449 17:21:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:20:38.449 17:21:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:20:38.449 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:20:38.449 [2024-07-24 17:21:21.688623] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:20:38.449 [2024-07-24 17:21:21.688815] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85406 ] 00:20:38.449 [2024-07-24 17:21:21.826965] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:38.449 [2024-07-24 17:21:21.948980] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:38.449 [2024-07-24 17:21:22.977674] bdev.c:4633:bdev_name_add: *ERROR*: Bdev name 804badbd-737b-4ae1-ad68-5d4533fbb47c already exists 00:20:38.449 [2024-07-24 17:21:22.977731] bdev.c:7755:bdev_register: *ERROR*: Unable to add uuid:804badbd-737b-4ae1-ad68-5d4533fbb47c alias for bdev NVMe1n1 00:20:38.449 [2024-07-24 17:21:22.977765] bdev_nvme.c:4318:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:20:38.449 Running I/O for 1 seconds... 00:20:38.449 00:20:38.449 Latency(us) 00:20:38.449 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:38.449 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:20:38.449 NVMe0n1 : 1.01 20216.70 78.97 0.00 0.00 6321.18 3530.01 15371.17 00:20:38.449 =================================================================================================================== 00:20:38.449 Total : 20216.70 78.97 0.00 0.00 6321.18 3530.01 15371.17 00:20:38.449 Received shutdown signal, test time was about 1.000000 seconds 00:20:38.449 00:20:38.449 Latency(us) 00:20:38.449 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:38.449 =================================================================================================================== 00:20:38.449 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:38.449 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:20:38.449 17:21:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:38.449 17:21:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:20:38.449 17:21:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:20:38.449 17:21:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:38.449 17:21:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:20:38.449 17:21:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:38.449 17:21:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:20:38.449 17:21:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:38.449 17:21:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:38.449 rmmod nvme_tcp 00:20:38.449 rmmod nvme_fabrics 00:20:38.449 rmmod nvme_keyring 00:20:38.449 17:21:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:38.449 17:21:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:20:38.449 17:21:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:20:38.449 17:21:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 85354 ']' 00:20:38.449 17:21:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 85354 00:20:38.449 17:21:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 85354 ']' 00:20:38.449 17:21:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 85354 00:20:38.449 17:21:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:20:38.449 17:21:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:38.449 17:21:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85354 00:20:38.449 killing process with pid 85354 00:20:38.449 17:21:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:38.449 17:21:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:38.449 17:21:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85354' 00:20:38.449 17:21:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 85354 00:20:38.449 17:21:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 85354 00:20:38.708 17:21:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:38.708 17:21:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:38.708 17:21:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:38.708 17:21:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:38.708 17:21:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:38.708 17:21:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:38.708 17:21:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:38.708 17:21:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:38.708 17:21:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:38.708 00:20:38.708 real 0m4.894s 00:20:38.708 user 0m15.450s 00:20:38.708 sys 0m1.070s 00:20:38.708 17:21:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:38.708 ************************************ 00:20:38.708 END TEST nvmf_multicontroller 00:20:38.708 ************************************ 00:20:38.708 17:21:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:38.708 17:21:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:38.708 17:21:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:38.708 17:21:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:20:38.708 17:21:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:38.708 ************************************ 00:20:38.708 START TEST nvmf_aer 00:20:38.708 ************************************ 00:20:38.708 17:21:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:38.967 * Looking for test storage... 00:20:38.967 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:38.967 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:38.967 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:20:38.967 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:38.967 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:38.967 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:38.967 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:38.967 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:38.967 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:38.967 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:38.967 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:38.967 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:38.967 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:38.967 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:20:38.967 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:20:38.967 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:38.967 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:38.967 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:38.967 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:38.967 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:38.967 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:38.967 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:38.967 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:38.967 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.967 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.967 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.967 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:20:38.967 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.967 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:20:38.967 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:38.967 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:38.967 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:38.967 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:38.967 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:38.967 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:38.968 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:38.968 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 
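
The hostnqn/hostid pair generated above is what the initiator-side helpers hand to nvme-cli whenever a test performs a kernel-initiator connect (this particular aer test drives the SPDK userspace initiator instead, so the pair goes unused here). A hypothetical invocation, using the values generated in this run and the test framework's default subsystem NQN, would look roughly like:

  # $NVME_HOST expands to --hostnqn=... --hostid=... (values produced by `nvme gen-hostnqn` above)
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:testnqn \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b \
      --hostid=6e739a91-f4c2-49d2-9c71-a65eb7d5315b
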
00:20:38.968 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:20:38.968 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:38.968 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:38.968 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:38.968 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:38.968 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:38.968 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:38.968 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:38.968 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:38.968 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:38.968 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:38.968 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:38.968 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:38.968 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:38.968 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:38.968 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:38.968 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:38.968 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:38.968 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:38.968 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:38.968 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:38.968 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:38.968 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:38.968 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:38.968 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:38.968 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:38.968 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:38.968 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:38.968 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:38.968 Cannot find device "nvmf_tgt_br" 00:20:38.968 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@155 -- # true 00:20:38.968 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:38.968 Cannot find device "nvmf_tgt_br2" 00:20:38.968 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@156 -- # true 00:20:38.968 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:38.968 17:21:25 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:38.968 Cannot find device "nvmf_tgt_br" 00:20:38.968 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@158 -- # true 00:20:38.968 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:38.968 Cannot find device "nvmf_tgt_br2" 00:20:38.968 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@159 -- # true 00:20:38.968 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:38.968 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:38.968 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:38.968 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:38.968 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@162 -- # true 00:20:38.968 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:38.968 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:38.968 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@163 -- # true 00:20:38.968 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:38.968 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:38.968 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:39.226 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:39.226 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:39.226 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:39.226 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:39.226 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:39.226 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:39.227 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:39.227 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:39.227 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:39.227 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:39.227 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:39.227 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:39.227 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:39.227 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:39.227 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:39.227 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@196 -- # 
ip link set nvmf_init_br master nvmf_br 00:20:39.227 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:39.227 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:39.227 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:39.227 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:39.227 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:39.227 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:39.227 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:20:39.227 00:20:39.227 --- 10.0.0.2 ping statistics --- 00:20:39.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:39.227 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:20:39.227 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:39.227 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:39.227 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:20:39.227 00:20:39.227 --- 10.0.0.3 ping statistics --- 00:20:39.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:39.227 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:20:39.227 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:39.227 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:39.227 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:20:39.227 00:20:39.227 --- 10.0.0.1 ping statistics --- 00:20:39.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:39.227 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:20:39.227 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:39.227 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@433 -- # return 0 00:20:39.227 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:39.227 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:39.227 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:39.227 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:39.227 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:39.227 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:39.227 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:39.227 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:20:39.227 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:39.227 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:39.227 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:39.227 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=85657 00:20:39.227 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:39.227 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 85657 00:20:39.227 17:21:25 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 85657 ']' 00:20:39.227 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:39.227 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:39.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:39.227 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:39.227 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:39.227 17:21:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:39.486 [2024-07-24 17:21:25.471208] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:20:39.486 [2024-07-24 17:21:25.471360] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:39.486 [2024-07-24 17:21:25.609723] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:39.486 [2024-07-24 17:21:25.719179] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:39.486 [2024-07-24 17:21:25.719231] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:39.486 [2024-07-24 17:21:25.719242] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:39.486 [2024-07-24 17:21:25.719251] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:39.486 [2024-07-24 17:21:25.719258] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
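
Stripped of the xtrace noise, the virtual topology that nvmf_veth_init builds above is a network namespace joined to the host by veth pairs over a bridge, with the target launched inside the namespace. A minimal sketch of the same steps, run as root and using the interface names and addresses from this log (the second target interface, nvmf_tgt_if2/10.0.0.3, is set up the same way and omitted here):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # host-side initiator interface
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target interface, moved into the netns
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                      # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target listen address
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2                                            # sanity check before starting the target
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF

nvmftestfini reverses this through remove_spdk_ns and an address flush of nvmf_init_if, as seen at the end of the multicontroller test above.
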
00:20:39.744 [2024-07-24 17:21:25.719599] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:39.744 [2024-07-24 17:21:25.719729] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:39.744 [2024-07-24 17:21:25.719892] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:39.744 [2024-07-24 17:21:25.719895] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:40.322 17:21:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:40.322 17:21:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:20:40.322 17:21:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:40.322 17:21:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:40.322 17:21:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:40.322 17:21:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:40.322 17:21:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:40.322 17:21:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.322 17:21:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:40.322 [2024-07-24 17:21:26.490930] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:40.322 17:21:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.322 17:21:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:20:40.322 17:21:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.322 17:21:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:40.606 Malloc0 00:20:40.606 17:21:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.606 17:21:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:20:40.606 17:21:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.606 17:21:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:40.606 17:21:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.606 17:21:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:40.606 17:21:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.606 17:21:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:40.606 17:21:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.606 17:21:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:40.606 17:21:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.606 17:21:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:40.606 [2024-07-24 17:21:26.574814] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:40.606 17:21:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.606 
17:21:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:20:40.606 17:21:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.606 17:21:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:40.606 [ 00:20:40.606 { 00:20:40.606 "allow_any_host": true, 00:20:40.606 "hosts": [], 00:20:40.606 "listen_addresses": [], 00:20:40.606 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:40.606 "subtype": "Discovery" 00:20:40.606 }, 00:20:40.606 { 00:20:40.606 "allow_any_host": true, 00:20:40.606 "hosts": [], 00:20:40.606 "listen_addresses": [ 00:20:40.606 { 00:20:40.606 "adrfam": "IPv4", 00:20:40.606 "traddr": "10.0.0.2", 00:20:40.606 "trsvcid": "4420", 00:20:40.606 "trtype": "TCP" 00:20:40.606 } 00:20:40.606 ], 00:20:40.606 "max_cntlid": 65519, 00:20:40.606 "max_namespaces": 2, 00:20:40.606 "min_cntlid": 1, 00:20:40.606 "model_number": "SPDK bdev Controller", 00:20:40.606 "namespaces": [ 00:20:40.606 { 00:20:40.606 "bdev_name": "Malloc0", 00:20:40.606 "name": "Malloc0", 00:20:40.606 "nguid": "E00D4490AA7D45778AD86DE953825914", 00:20:40.606 "nsid": 1, 00:20:40.606 "uuid": "e00d4490-aa7d-4577-8ad8-6de953825914" 00:20:40.606 } 00:20:40.606 ], 00:20:40.606 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:40.606 "serial_number": "SPDK00000000000001", 00:20:40.606 "subtype": "NVMe" 00:20:40.606 } 00:20:40.606 ] 00:20:40.606 17:21:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.606 17:21:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:20:40.606 17:21:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:20:40.606 17:21:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=85711 00:20:40.606 17:21:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:20:40.606 17:21:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:20:40.606 17:21:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:20:40.606 17:21:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:40.606 17:21:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:20:40.606 17:21:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:20:40.606 17:21:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:20:40.606 17:21:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:40.606 17:21:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:20:40.606 17:21:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:20:40.606 17:21:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:20:40.606 17:21:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:40.606 17:21:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:20:40.606 17:21:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:20:40.606 17:21:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:20:40.606 17:21:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.606 17:21:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:40.865 Malloc1 00:20:40.865 17:21:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.865 17:21:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:20:40.865 17:21:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.865 17:21:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:40.865 17:21:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.865 17:21:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:20:40.865 17:21:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.865 17:21:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:40.865 Asynchronous Event Request test 00:20:40.865 Attaching to 10.0.0.2 00:20:40.865 Attached to 10.0.0.2 00:20:40.865 Registering asynchronous event callbacks... 00:20:40.865 Starting namespace attribute notice tests for all controllers... 00:20:40.865 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:20:40.865 aer_cb - Changed Namespace 00:20:40.865 Cleaning up... 00:20:40.865 [ 00:20:40.865 { 00:20:40.865 "allow_any_host": true, 00:20:40.865 "hosts": [], 00:20:40.865 "listen_addresses": [], 00:20:40.865 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:40.865 "subtype": "Discovery" 00:20:40.865 }, 00:20:40.865 { 00:20:40.865 "allow_any_host": true, 00:20:40.865 "hosts": [], 00:20:40.865 "listen_addresses": [ 00:20:40.865 { 00:20:40.865 "adrfam": "IPv4", 00:20:40.865 "traddr": "10.0.0.2", 00:20:40.865 "trsvcid": "4420", 00:20:40.865 "trtype": "TCP" 00:20:40.865 } 00:20:40.865 ], 00:20:40.865 "max_cntlid": 65519, 00:20:40.865 "max_namespaces": 2, 00:20:40.865 "min_cntlid": 1, 00:20:40.865 "model_number": "SPDK bdev Controller", 00:20:40.865 "namespaces": [ 00:20:40.865 { 00:20:40.865 "bdev_name": "Malloc0", 00:20:40.865 "name": "Malloc0", 00:20:40.865 "nguid": "E00D4490AA7D45778AD86DE953825914", 00:20:40.865 "nsid": 1, 00:20:40.865 "uuid": "e00d4490-aa7d-4577-8ad8-6de953825914" 00:20:40.865 }, 00:20:40.866 { 00:20:40.866 "bdev_name": "Malloc1", 00:20:40.866 "name": "Malloc1", 00:20:40.866 "nguid": "232805FE5F674F6497666BE1B1FCE60F", 00:20:40.866 "nsid": 2, 00:20:40.866 "uuid": "232805fe-5f67-4f64-9766-6be1b1fce60f" 00:20:40.866 } 00:20:40.866 ], 00:20:40.866 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:40.866 "serial_number": "SPDK00000000000001", 00:20:40.866 "subtype": "NVMe" 00:20:40.866 } 00:20:40.866 ] 00:20:40.866 17:21:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.866 17:21:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 85711 00:20:40.866 17:21:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:20:40.866 17:21:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.866 17:21:26 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:20:40.866 17:21:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.866 17:21:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:20:40.866 17:21:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.866 17:21:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:40.866 17:21:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.866 17:21:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:40.866 17:21:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.866 17:21:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:40.866 17:21:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.866 17:21:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:20:40.866 17:21:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:20:40.866 17:21:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:40.866 17:21:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:20:40.866 17:21:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:40.866 17:21:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:20:40.866 17:21:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:40.866 17:21:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:40.866 rmmod nvme_tcp 00:20:40.866 rmmod nvme_fabrics 00:20:40.866 rmmod nvme_keyring 00:20:40.866 17:21:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:40.866 17:21:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:20:40.866 17:21:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:20:40.866 17:21:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 85657 ']' 00:20:40.866 17:21:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 85657 00:20:40.866 17:21:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 85657 ']' 00:20:40.866 17:21:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 85657 00:20:40.866 17:21:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:20:40.866 17:21:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:40.866 17:21:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85657 00:20:40.866 17:21:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:40.866 17:21:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:40.866 killing process with pid 85657 00:20:40.866 17:21:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85657' 00:20:40.866 17:21:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 85657 00:20:40.866 17:21:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 85657 00:20:41.125 17:21:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:41.125 17:21:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@495 
-- # [[ tcp == \t\c\p ]] 00:20:41.125 17:21:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:41.125 17:21:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:41.125 17:21:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:41.125 17:21:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:41.125 17:21:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:41.125 17:21:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:41.125 17:21:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:41.125 00:20:41.125 real 0m2.422s 00:20:41.125 user 0m6.550s 00:20:41.125 sys 0m0.675s 00:20:41.125 17:21:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:41.385 17:21:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:41.385 ************************************ 00:20:41.385 END TEST nvmf_aer 00:20:41.385 ************************************ 00:20:41.385 17:21:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:41.385 17:21:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:41.385 17:21:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:41.385 17:21:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:41.385 ************************************ 00:20:41.385 START TEST nvmf_async_init 00:20:41.385 ************************************ 00:20:41.385 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:41.385 * Looking for test storage... 
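[editorial note, not captured output] The nvmf_aer xtrace above boils down to the RPC sequence below. This is an illustrative recap only, written as plain scripts/rpc.py calls against the default /var/tmp/spdk.sock of an already running nvmf_tgt; paths, NQNs, and flags are taken from the trace, everything else is an assumption.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumes default socket /var/tmp/spdk.sock
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 --name Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rm -f /tmp/aer_touch_file
    # start the AER reader in the background; it touches the file once its callbacks are armed
    /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -n 2 -t /tmp/aer_touch_file &
    aerpid=$!
    while [ ! -e /tmp/aer_touch_file ]; do sleep 0.1; done
    # hot-add a second namespace to trigger the "Changed Namespace" AER seen in the log
    $rpc bdev_malloc_create 64 4096 --name Malloc1
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
    wait "$aerpid"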
00:20:41.385 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:41.385 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:41.385 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:20:41.385 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:41.385 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:41.385 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:41.385 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:41.385 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:41.385 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:41.385 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:41.385 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:41.385 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:41.385 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:41.385 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:20:41.385 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:20:41.385 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:41.385 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:41.385 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:41.385 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:41.385 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:41.385 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:41.385 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:41.385 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:41.385 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.385 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.385 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.385 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:20:41.385 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.385 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:20:41.385 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:41.385 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:41.385 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:41.385 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:41.385 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:41.385 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:41.385 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:41.385 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:41.385 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:20:41.385 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:20:41.385 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:20:41.385 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:20:41.385 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:20:41.385 17:21:27 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:20:41.385 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=2287fb5136fd4d9b8126d39dd4c0093d 00:20:41.385 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:20:41.385 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:41.385 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:41.385 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:41.385 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:41.385 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:41.385 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:41.385 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:41.385 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:41.385 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:41.385 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:41.385 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:41.385 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:41.385 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:41.386 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:41.386 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:41.386 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:41.386 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:41.386 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:41.386 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:41.386 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:41.386 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:41.386 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:41.386 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:41.386 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:41.386 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:41.386 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:41.386 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:41.386 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:41.386 Cannot find device "nvmf_tgt_br" 00:20:41.386 17:21:27 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@155 -- # true 00:20:41.386 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:41.386 Cannot find device "nvmf_tgt_br2" 00:20:41.386 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@156 -- # true 00:20:41.386 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:41.386 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:41.386 Cannot find device "nvmf_tgt_br" 00:20:41.386 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@158 -- # true 00:20:41.386 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:41.386 Cannot find device "nvmf_tgt_br2" 00:20:41.386 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@159 -- # true 00:20:41.386 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:41.645 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:41.645 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:41.645 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:41.645 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@162 -- # true 00:20:41.645 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:41.645 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:41.645 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@163 -- # true 00:20:41.645 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:41.645 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:41.645 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:41.645 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:41.645 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:41.645 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:41.645 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:41.645 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:41.645 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:41.645 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:41.645 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:41.645 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:41.645 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:41.645 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:41.645 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:41.645 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:41.645 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:41.645 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:41.645 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:41.645 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:41.645 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:41.645 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:41.645 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:41.645 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:41.645 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:41.645 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:20:41.645 00:20:41.645 --- 10.0.0.2 ping statistics --- 00:20:41.645 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:41.645 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:20:41.645 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:41.645 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:41.645 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:20:41.645 00:20:41.645 --- 10.0.0.3 ping statistics --- 00:20:41.645 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:41.645 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:20:41.645 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:41.645 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:41.646 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:20:41.646 00:20:41.646 --- 10.0.0.1 ping statistics --- 00:20:41.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:41.646 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:20:41.646 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:41.646 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@433 -- # return 0 00:20:41.646 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:41.646 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:41.646 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:41.646 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:41.646 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:41.646 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:41.646 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:41.646 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:20:41.646 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:41.646 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:41.646 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:41.646 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=85879 00:20:41.646 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:41.646 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 85879 00:20:41.646 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 85879 ']' 00:20:41.646 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:41.646 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:41.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:41.646 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:41.646 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:41.646 17:21:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:41.905 [2024-07-24 17:21:27.931656] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:20:41.905 [2024-07-24 17:21:27.931775] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:41.905 [2024-07-24 17:21:28.072857] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:42.163 [2024-07-24 17:21:28.179828] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:20:42.163 [2024-07-24 17:21:28.179895] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:42.163 [2024-07-24 17:21:28.179922] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:42.163 [2024-07-24 17:21:28.179930] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:42.163 [2024-07-24 17:21:28.179937] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:42.164 [2024-07-24 17:21:28.179963] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:42.731 17:21:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:42.731 17:21:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:20:42.731 17:21:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:42.731 17:21:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:42.731 17:21:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:42.731 17:21:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:42.731 17:21:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:20:42.731 17:21:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.731 17:21:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:42.731 [2024-07-24 17:21:28.932418] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:42.731 17:21:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.731 17:21:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:20:42.731 17:21:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.731 17:21:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:42.731 null0 00:20:42.731 17:21:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.731 17:21:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:20:42.731 17:21:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.731 17:21:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:42.732 17:21:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.732 17:21:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:20:42.732 17:21:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.732 17:21:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:42.732 17:21:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.732 17:21:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 2287fb5136fd4d9b8126d39dd4c0093d 00:20:42.732 17:21:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 
-- # xtrace_disable 00:20:42.732 17:21:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:42.991 17:21:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.991 17:21:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:42.991 17:21:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.991 17:21:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:42.991 [2024-07-24 17:21:28.972537] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:42.991 17:21:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.991 17:21:28 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:20:42.991 17:21:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.991 17:21:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:42.991 nvme0n1 00:20:42.991 17:21:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.991 17:21:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:42.991 17:21:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.991 17:21:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:42.991 [ 00:20:42.991 { 00:20:42.991 "aliases": [ 00:20:42.991 "2287fb51-36fd-4d9b-8126-d39dd4c0093d" 00:20:42.991 ], 00:20:42.991 "assigned_rate_limits": { 00:20:42.991 "r_mbytes_per_sec": 0, 00:20:42.991 "rw_ios_per_sec": 0, 00:20:42.991 "rw_mbytes_per_sec": 0, 00:20:42.991 "w_mbytes_per_sec": 0 00:20:42.991 }, 00:20:42.991 "block_size": 512, 00:20:42.991 "claimed": false, 00:20:42.991 "driver_specific": { 00:20:42.991 "mp_policy": "active_passive", 00:20:42.991 "nvme": [ 00:20:42.991 { 00:20:42.991 "ctrlr_data": { 00:20:42.991 "ana_reporting": false, 00:20:42.991 "cntlid": 1, 00:20:42.991 "firmware_revision": "24.09", 00:20:42.991 "model_number": "SPDK bdev Controller", 00:20:42.991 "multi_ctrlr": true, 00:20:42.991 "oacs": { 00:20:42.991 "firmware": 0, 00:20:42.991 "format": 0, 00:20:43.251 "ns_manage": 0, 00:20:43.251 "security": 0 00:20:43.251 }, 00:20:43.251 "serial_number": "00000000000000000000", 00:20:43.251 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:43.251 "vendor_id": "0x8086" 00:20:43.251 }, 00:20:43.251 "ns_data": { 00:20:43.251 "can_share": true, 00:20:43.251 "id": 1 00:20:43.251 }, 00:20:43.251 "trid": { 00:20:43.251 "adrfam": "IPv4", 00:20:43.251 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:43.251 "traddr": "10.0.0.2", 00:20:43.251 "trsvcid": "4420", 00:20:43.251 "trtype": "TCP" 00:20:43.251 }, 00:20:43.251 "vs": { 00:20:43.251 "nvme_version": "1.3" 00:20:43.251 } 00:20:43.251 } 00:20:43.251 ] 00:20:43.251 }, 00:20:43.251 "memory_domains": [ 00:20:43.251 { 00:20:43.251 "dma_device_id": "system", 00:20:43.251 "dma_device_type": 1 00:20:43.251 } 00:20:43.251 ], 00:20:43.251 "name": "nvme0n1", 00:20:43.251 "num_blocks": 2097152, 00:20:43.251 "product_name": "NVMe disk", 00:20:43.251 "supported_io_types": { 00:20:43.251 "abort": true, 00:20:43.251 "compare": true, 
00:20:43.251 "compare_and_write": true, 00:20:43.251 "copy": true, 00:20:43.251 "flush": true, 00:20:43.251 "get_zone_info": false, 00:20:43.251 "nvme_admin": true, 00:20:43.251 "nvme_io": true, 00:20:43.251 "nvme_io_md": false, 00:20:43.251 "nvme_iov_md": false, 00:20:43.251 "read": true, 00:20:43.251 "reset": true, 00:20:43.251 "seek_data": false, 00:20:43.251 "seek_hole": false, 00:20:43.251 "unmap": false, 00:20:43.251 "write": true, 00:20:43.251 "write_zeroes": true, 00:20:43.251 "zcopy": false, 00:20:43.251 "zone_append": false, 00:20:43.251 "zone_management": false 00:20:43.251 }, 00:20:43.251 "uuid": "2287fb51-36fd-4d9b-8126-d39dd4c0093d", 00:20:43.251 "zoned": false 00:20:43.251 } 00:20:43.251 ] 00:20:43.251 17:21:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.251 17:21:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:20:43.251 17:21:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.251 17:21:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:43.251 [2024-07-24 17:21:29.241187] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:43.251 [2024-07-24 17:21:29.241295] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20b4b00 (9): Bad file descriptor 00:20:43.251 [2024-07-24 17:21:29.373371] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:20:43.251 17:21:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.251 17:21:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:43.251 17:21:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.251 17:21:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:43.251 [ 00:20:43.251 { 00:20:43.251 "aliases": [ 00:20:43.251 "2287fb51-36fd-4d9b-8126-d39dd4c0093d" 00:20:43.251 ], 00:20:43.251 "assigned_rate_limits": { 00:20:43.251 "r_mbytes_per_sec": 0, 00:20:43.251 "rw_ios_per_sec": 0, 00:20:43.251 "rw_mbytes_per_sec": 0, 00:20:43.251 "w_mbytes_per_sec": 0 00:20:43.251 }, 00:20:43.251 "block_size": 512, 00:20:43.251 "claimed": false, 00:20:43.251 "driver_specific": { 00:20:43.251 "mp_policy": "active_passive", 00:20:43.251 "nvme": [ 00:20:43.251 { 00:20:43.251 "ctrlr_data": { 00:20:43.251 "ana_reporting": false, 00:20:43.251 "cntlid": 2, 00:20:43.251 "firmware_revision": "24.09", 00:20:43.251 "model_number": "SPDK bdev Controller", 00:20:43.251 "multi_ctrlr": true, 00:20:43.251 "oacs": { 00:20:43.251 "firmware": 0, 00:20:43.251 "format": 0, 00:20:43.251 "ns_manage": 0, 00:20:43.251 "security": 0 00:20:43.251 }, 00:20:43.251 "serial_number": "00000000000000000000", 00:20:43.251 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:43.251 "vendor_id": "0x8086" 00:20:43.251 }, 00:20:43.251 "ns_data": { 00:20:43.251 "can_share": true, 00:20:43.251 "id": 1 00:20:43.251 }, 00:20:43.251 "trid": { 00:20:43.251 "adrfam": "IPv4", 00:20:43.251 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:43.251 "traddr": "10.0.0.2", 00:20:43.251 "trsvcid": "4420", 00:20:43.251 "trtype": "TCP" 00:20:43.251 }, 00:20:43.251 "vs": { 00:20:43.252 "nvme_version": "1.3" 00:20:43.252 } 00:20:43.252 } 00:20:43.252 ] 00:20:43.252 }, 00:20:43.252 "memory_domains": [ 00:20:43.252 { 
00:20:43.252 "dma_device_id": "system", 00:20:43.252 "dma_device_type": 1 00:20:43.252 } 00:20:43.252 ], 00:20:43.252 "name": "nvme0n1", 00:20:43.252 "num_blocks": 2097152, 00:20:43.252 "product_name": "NVMe disk", 00:20:43.252 "supported_io_types": { 00:20:43.252 "abort": true, 00:20:43.252 "compare": true, 00:20:43.252 "compare_and_write": true, 00:20:43.252 "copy": true, 00:20:43.252 "flush": true, 00:20:43.252 "get_zone_info": false, 00:20:43.252 "nvme_admin": true, 00:20:43.252 "nvme_io": true, 00:20:43.252 "nvme_io_md": false, 00:20:43.252 "nvme_iov_md": false, 00:20:43.252 "read": true, 00:20:43.252 "reset": true, 00:20:43.252 "seek_data": false, 00:20:43.252 "seek_hole": false, 00:20:43.252 "unmap": false, 00:20:43.252 "write": true, 00:20:43.252 "write_zeroes": true, 00:20:43.252 "zcopy": false, 00:20:43.252 "zone_append": false, 00:20:43.252 "zone_management": false 00:20:43.252 }, 00:20:43.252 "uuid": "2287fb51-36fd-4d9b-8126-d39dd4c0093d", 00:20:43.252 "zoned": false 00:20:43.252 } 00:20:43.252 ] 00:20:43.252 17:21:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.252 17:21:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:43.252 17:21:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.252 17:21:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:43.252 17:21:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.252 17:21:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:20:43.252 17:21:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.3F2v6DpZOy 00:20:43.252 17:21:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:43.252 17:21:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.3F2v6DpZOy 00:20:43.252 17:21:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:20:43.252 17:21:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.252 17:21:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:43.252 17:21:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.252 17:21:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:20:43.252 17:21:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.252 17:21:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:43.252 [2024-07-24 17:21:29.437348] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:43.252 [2024-07-24 17:21:29.437583] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:43.252 17:21:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.252 17:21:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.3F2v6DpZOy 00:20:43.252 17:21:29 
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.252 17:21:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:43.252 [2024-07-24 17:21:29.445338] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:43.252 17:21:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.252 17:21:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.3F2v6DpZOy 00:20:43.252 17:21:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.252 17:21:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:43.252 [2024-07-24 17:21:29.453347] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:43.252 [2024-07-24 17:21:29.453445] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:43.511 nvme0n1 00:20:43.511 17:21:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.511 17:21:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:43.511 17:21:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.511 17:21:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:43.511 [ 00:20:43.511 { 00:20:43.511 "aliases": [ 00:20:43.511 "2287fb51-36fd-4d9b-8126-d39dd4c0093d" 00:20:43.511 ], 00:20:43.511 "assigned_rate_limits": { 00:20:43.511 "r_mbytes_per_sec": 0, 00:20:43.511 "rw_ios_per_sec": 0, 00:20:43.511 "rw_mbytes_per_sec": 0, 00:20:43.511 "w_mbytes_per_sec": 0 00:20:43.511 }, 00:20:43.511 "block_size": 512, 00:20:43.511 "claimed": false, 00:20:43.511 "driver_specific": { 00:20:43.511 "mp_policy": "active_passive", 00:20:43.511 "nvme": [ 00:20:43.511 { 00:20:43.511 "ctrlr_data": { 00:20:43.511 "ana_reporting": false, 00:20:43.511 "cntlid": 3, 00:20:43.511 "firmware_revision": "24.09", 00:20:43.511 "model_number": "SPDK bdev Controller", 00:20:43.511 "multi_ctrlr": true, 00:20:43.511 "oacs": { 00:20:43.511 "firmware": 0, 00:20:43.511 "format": 0, 00:20:43.511 "ns_manage": 0, 00:20:43.511 "security": 0 00:20:43.511 }, 00:20:43.511 "serial_number": "00000000000000000000", 00:20:43.511 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:43.511 "vendor_id": "0x8086" 00:20:43.511 }, 00:20:43.511 "ns_data": { 00:20:43.511 "can_share": true, 00:20:43.511 "id": 1 00:20:43.511 }, 00:20:43.511 "trid": { 00:20:43.511 "adrfam": "IPv4", 00:20:43.511 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:43.511 "traddr": "10.0.0.2", 00:20:43.511 "trsvcid": "4421", 00:20:43.511 "trtype": "TCP" 00:20:43.511 }, 00:20:43.511 "vs": { 00:20:43.511 "nvme_version": "1.3" 00:20:43.511 } 00:20:43.511 } 00:20:43.511 ] 00:20:43.511 }, 00:20:43.511 "memory_domains": [ 00:20:43.511 { 00:20:43.511 "dma_device_id": "system", 00:20:43.511 "dma_device_type": 1 00:20:43.511 } 00:20:43.511 ], 00:20:43.511 "name": "nvme0n1", 00:20:43.511 "num_blocks": 2097152, 00:20:43.511 "product_name": "NVMe disk", 00:20:43.511 "supported_io_types": { 00:20:43.511 "abort": true, 00:20:43.511 "compare": true, 00:20:43.511 
"compare_and_write": true, 00:20:43.511 "copy": true, 00:20:43.511 "flush": true, 00:20:43.511 "get_zone_info": false, 00:20:43.511 "nvme_admin": true, 00:20:43.511 "nvme_io": true, 00:20:43.511 "nvme_io_md": false, 00:20:43.511 "nvme_iov_md": false, 00:20:43.511 "read": true, 00:20:43.511 "reset": true, 00:20:43.511 "seek_data": false, 00:20:43.511 "seek_hole": false, 00:20:43.511 "unmap": false, 00:20:43.511 "write": true, 00:20:43.511 "write_zeroes": true, 00:20:43.511 "zcopy": false, 00:20:43.511 "zone_append": false, 00:20:43.511 "zone_management": false 00:20:43.511 }, 00:20:43.511 "uuid": "2287fb51-36fd-4d9b-8126-d39dd4c0093d", 00:20:43.511 "zoned": false 00:20:43.511 } 00:20:43.511 ] 00:20:43.511 17:21:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.511 17:21:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:43.512 17:21:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.512 17:21:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:43.512 17:21:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.512 17:21:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.3F2v6DpZOy 00:20:43.512 17:21:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:20:43.512 17:21:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:20:43.512 17:21:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:43.512 17:21:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:20:43.512 17:21:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:43.512 17:21:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:20:43.512 17:21:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:43.512 17:21:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:43.512 rmmod nvme_tcp 00:20:43.512 rmmod nvme_fabrics 00:20:43.512 rmmod nvme_keyring 00:20:43.512 17:21:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:43.512 17:21:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:20:43.512 17:21:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:20:43.512 17:21:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 85879 ']' 00:20:43.512 17:21:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 85879 00:20:43.512 17:21:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 85879 ']' 00:20:43.512 17:21:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 85879 00:20:43.512 17:21:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:20:43.512 17:21:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:43.512 17:21:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85879 00:20:43.512 17:21:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:43.512 17:21:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 
= sudo ']' 00:20:43.512 killing process with pid 85879 00:20:43.512 17:21:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85879' 00:20:43.512 17:21:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 85879 00:20:43.512 [2024-07-24 17:21:29.697181] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:43.512 [2024-07-24 17:21:29.697228] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:43.512 17:21:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 85879 00:20:43.771 17:21:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:43.771 17:21:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:43.771 17:21:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:43.771 17:21:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:43.771 17:21:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:43.771 17:21:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:43.771 17:21:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:43.771 17:21:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:43.771 17:21:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:43.771 00:20:43.771 real 0m2.529s 00:20:43.771 user 0m2.362s 00:20:43.771 sys 0m0.601s 00:20:43.771 17:21:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:43.771 17:21:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:43.771 ************************************ 00:20:43.771 END TEST nvmf_async_init 00:20:43.771 ************************************ 00:20:43.771 17:21:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:20:43.771 17:21:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:43.771 17:21:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:43.771 17:21:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.771 ************************************ 00:20:43.771 START TEST dma 00:20:43.771 ************************************ 00:20:43.771 17:21:29 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:20:44.030 * Looking for test storage... 
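[editorial note, not captured output] For reference, the nvmf_async_init flow traced above reduces to the sketch below: a null bdev exported as namespace of cnode0, an initiator attach over plain TCP, then the TLS leg with a --secure-channel listener and a PSK file. Commands and arguments mirror the trace; the PSK path stands in for the mktemp result (/tmp/tmp.3F2v6DpZOy in this run), and a running nvmf_tgt on the default socket is assumed.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o
    $rpc bdev_null_create null0 1024 512
    $rpc bdev_wait_for_examine
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 2287fb5136fd4d9b8126d39dd4c0093d
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # attach over plain TCP; nvme0n1 then reports the uuid derived from the nguid above
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
    $rpc bdev_get_bdevs -b nvme0n1
    $rpc bdev_nvme_reset_controller nvme0
    $rpc bdev_nvme_detach_controller nvme0
    # TLS leg: restrict host access, open a secure-channel listener on 4421,
    # register host1 with a PSK file, and re-attach using the same key
    psk=$(mktemp)
    echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$psk"
    chmod 0600 "$psk"
    $rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk "$psk"
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$psk"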
00:20:44.030 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:44.030 17:21:30 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:44.030 17:21:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:20:44.030 17:21:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:44.030 17:21:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:44.030 17:21:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:44.030 17:21:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:44.030 17:21:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:44.030 17:21:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:44.030 17:21:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:44.030 17:21:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:44.030 17:21:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:44.030 17:21:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:44.030 17:21:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:20:44.030 17:21:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:20:44.030 17:21:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:44.030 17:21:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:44.030 17:21:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:44.030 17:21:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:44.030 17:21:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:44.030 17:21:30 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:44.030 17:21:30 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:44.030 17:21:30 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:44.030 17:21:30 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:44.030 17:21:30 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:44.030 17:21:30 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:44.030 17:21:30 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:20:44.030 17:21:30 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:44.030 17:21:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@47 -- # : 0 00:20:44.030 17:21:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:44.030 17:21:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:44.030 17:21:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:44.030 17:21:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:44.030 17:21:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:44.030 17:21:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:44.030 17:21:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:44.030 17:21:30 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:44.030 17:21:30 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:20:44.030 17:21:30 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:20:44.030 00:20:44.030 real 0m0.102s 00:20:44.030 user 0m0.047s 00:20:44.030 sys 0m0.062s 00:20:44.030 17:21:30 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:44.030 ************************************ 00:20:44.030 END TEST dma 00:20:44.030 ************************************ 00:20:44.030 17:21:30 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:20:44.030 17:21:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:44.030 17:21:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:44.030 17:21:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:44.030 17:21:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.030 ************************************ 00:20:44.030 START TEST nvmf_identify 00:20:44.030 ************************************ 00:20:44.030 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:44.030 * Looking for test storage... 00:20:44.030 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:44.030 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:44.030 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:20:44.030 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:44.030 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:44.030 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:44.030 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:44.030 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:44.030 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:44.030 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:44.030 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:44.030 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:44.030 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:44.031 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:20:44.031 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:20:44.031 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:44.031 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:44.031 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:44.031 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:44.031 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:44.031 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:44.031 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:44.031 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:44.031 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:44.031 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:44.031 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:44.031 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:20:44.031 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:44.031 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:20:44.031 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:44.031 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:44.031 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:44.031 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:44.031 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:44.031 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:44.031 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:44.031 17:21:30 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:44.031 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:44.031 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:44.031 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:20:44.031 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:44.031 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:44.031 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:44.031 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:44.031 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:44.031 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:44.031 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:44.031 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:44.031 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:44.031 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:44.031 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:44.031 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:44.031 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:44.031 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:44.031 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:44.031 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:44.031 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:44.031 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:44.031 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:44.031 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:44.031 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:44.031 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:44.031 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:44.031 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:44.031 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:44.031 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:44.031 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:44.290 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:44.290 Cannot find device "nvmf_tgt_br" 00:20:44.290 17:21:30 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # true 00:20:44.290 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:44.290 Cannot find device "nvmf_tgt_br2" 00:20:44.290 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # true 00:20:44.290 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:44.290 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:44.290 Cannot find device "nvmf_tgt_br" 00:20:44.290 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # true 00:20:44.290 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:44.290 Cannot find device "nvmf_tgt_br2" 00:20:44.290 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # true 00:20:44.290 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:44.290 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:44.290 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:44.290 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:44.290 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:20:44.290 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:44.290 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:44.290 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:20:44.290 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:44.290 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:44.290 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:44.290 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:44.290 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:44.290 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:44.290 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:44.290 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:44.290 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:44.290 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:44.290 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:44.290 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:44.290 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:44.290 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if up 00:20:44.290 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:44.290 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:44.290 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:44.290 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:44.290 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:44.290 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:44.549 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:44.549 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:44.549 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:44.549 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:44.549 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:44.549 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.097 ms 00:20:44.549 00:20:44.549 --- 10.0.0.2 ping statistics --- 00:20:44.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:44.549 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:20:44.549 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:44.549 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:44.549 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:20:44.549 00:20:44.549 --- 10.0.0.3 ping statistics --- 00:20:44.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:44.549 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:20:44.549 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:44.549 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
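The nvmf_veth_init sequence traced above is what makes a TCP-transport test possible on a single VM: the target's interfaces live in the nvmf_tgt_ns_spdk namespace, the host-side peers hang off a bridge, and port 4420 is opened on the initiator interface. A condensed sketch of that topology, with every command taken from this trace (only the ordering is regrouped for readability):

# Target-side interfaces in a namespace, host-side peers bridged together.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
# Addressing: 10.0.0.1 is the initiator, 10.0.0.2/10.0.0.3 are target listeners.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# Bridge the host-side veth ends and open the NVMe/TCP port.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# The ping output around this point (10.0.0.2, 10.0.0.3, then 10.0.0.1 from
# inside the namespace) verifies that the bridge forwards traffic both ways.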
00:20:44.549 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:20:44.549 00:20:44.549 --- 10.0.0.1 ping statistics --- 00:20:44.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:44.549 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:20:44.549 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:44.549 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@433 -- # return 0 00:20:44.549 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:44.549 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:44.549 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:44.549 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:44.550 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:44.550 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:44.550 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:44.550 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:20:44.550 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:44.550 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:44.550 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=86148 00:20:44.550 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:44.550 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:44.550 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 86148 00:20:44.550 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 86148 ']' 00:20:44.550 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:44.550 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:44.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:44.550 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:44.550 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:44.550 17:21:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:44.550 [2024-07-24 17:21:30.649770] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
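With the topology verified, the trace launches the target application inside the namespace so it can bind the 10.0.0.2/10.0.0.3 listeners, then waits for its JSON-RPC socket before issuing any configuration. A sketch of that bring-up using the exact nvmf_tgt arguments from the trace; the rpc.py polling loop is an assumption standing in for the harness's waitforlisten helper, and /var/tmp/spdk.sock is assumed to be the default RPC socket:

# Start the target in the namespace (arguments copied from the trace above).
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# The harness calls: waitforlisten "$nvmfpid". A rough equivalent is to poll
# the RPC socket until the app answers rpc_get_methods.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done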
00:20:44.550 [2024-07-24 17:21:30.649874] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:44.809 [2024-07-24 17:21:30.788413] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:44.809 [2024-07-24 17:21:30.897042] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:44.809 [2024-07-24 17:21:30.897130] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:44.809 [2024-07-24 17:21:30.897142] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:44.809 [2024-07-24 17:21:30.897150] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:44.809 [2024-07-24 17:21:30.897157] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:44.809 [2024-07-24 17:21:30.897303] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:44.809 [2024-07-24 17:21:30.897378] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:44.809 [2024-07-24 17:21:30.898809] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:44.809 [2024-07-24 17:21:30.898908] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:45.375 17:21:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:45.375 17:21:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:20:45.375 17:21:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:45.375 17:21:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.375 17:21:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:45.633 [2024-07-24 17:21:31.613191] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:45.633 17:21:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.633 17:21:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:20:45.633 17:21:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:45.633 17:21:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:45.633 17:21:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:45.633 17:21:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.633 17:21:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:45.633 Malloc0 00:20:45.633 17:21:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.633 17:21:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:45.633 17:21:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.633 17:21:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:45.633 17:21:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.633 17:21:31 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:20:45.633 17:21:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.633 17:21:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:45.633 17:21:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.633 17:21:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:45.633 17:21:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.633 17:21:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:45.633 [2024-07-24 17:21:31.712809] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:45.633 17:21:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.633 17:21:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:45.633 17:21:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.633 17:21:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:45.633 17:21:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.633 17:21:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:20:45.633 17:21:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.633 17:21:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:45.633 [ 00:20:45.633 { 00:20:45.633 "allow_any_host": true, 00:20:45.633 "hosts": [], 00:20:45.633 "listen_addresses": [ 00:20:45.633 { 00:20:45.633 "adrfam": "IPv4", 00:20:45.633 "traddr": "10.0.0.2", 00:20:45.633 "trsvcid": "4420", 00:20:45.633 "trtype": "TCP" 00:20:45.633 } 00:20:45.633 ], 00:20:45.633 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:45.633 "subtype": "Discovery" 00:20:45.633 }, 00:20:45.633 { 00:20:45.633 "allow_any_host": true, 00:20:45.633 "hosts": [], 00:20:45.633 "listen_addresses": [ 00:20:45.633 { 00:20:45.633 "adrfam": "IPv4", 00:20:45.633 "traddr": "10.0.0.2", 00:20:45.633 "trsvcid": "4420", 00:20:45.633 "trtype": "TCP" 00:20:45.633 } 00:20:45.633 ], 00:20:45.633 "max_cntlid": 65519, 00:20:45.633 "max_namespaces": 32, 00:20:45.633 "min_cntlid": 1, 00:20:45.633 "model_number": "SPDK bdev Controller", 00:20:45.633 "namespaces": [ 00:20:45.633 { 00:20:45.633 "bdev_name": "Malloc0", 00:20:45.633 "eui64": "ABCDEF0123456789", 00:20:45.633 "name": "Malloc0", 00:20:45.633 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:20:45.633 "nsid": 1, 00:20:45.633 "uuid": "03f56112-630b-4b2a-9d89-6ede1c62b499" 00:20:45.633 } 00:20:45.633 ], 00:20:45.633 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:45.633 "serial_number": "SPDK00000000000001", 00:20:45.633 "subtype": "NVMe" 00:20:45.633 } 00:20:45.633 ] 00:20:45.633 17:21:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.633 17:21:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 
trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:20:45.633 [2024-07-24 17:21:31.763040] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:20:45.633 [2024-07-24 17:21:31.763086] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86201 ] 00:20:45.895 [2024-07-24 17:21:31.899312] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:20:45.895 [2024-07-24 17:21:31.899389] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:45.895 [2024-07-24 17:21:31.899396] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:45.895 [2024-07-24 17:21:31.899410] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:45.895 [2024-07-24 17:21:31.899420] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:45.895 [2024-07-24 17:21:31.899579] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:20:45.895 [2024-07-24 17:21:31.899663] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1567a60 0 00:20:45.895 [2024-07-24 17:21:31.904123] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:45.895 [2024-07-24 17:21:31.904149] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:45.895 [2024-07-24 17:21:31.904156] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:45.895 [2024-07-24 17:21:31.904159] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:45.895 [2024-07-24 17:21:31.904210] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.895 [2024-07-24 17:21:31.904218] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.895 [2024-07-24 17:21:31.904223] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1567a60) 00:20:45.895 [2024-07-24 17:21:31.904239] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:45.895 [2024-07-24 17:21:31.904272] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15aa840, cid 0, qid 0 00:20:45.895 [2024-07-24 17:21:31.912117] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.895 [2024-07-24 17:21:31.912140] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.895 [2024-07-24 17:21:31.912145] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.895 [2024-07-24 17:21:31.912150] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15aa840) on tqpair=0x1567a60 00:20:45.895 [2024-07-24 17:21:31.912166] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:45.895 [2024-07-24 17:21:31.912175] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:20:45.895 [2024-07-24 17:21:31.912182] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:20:45.895 [2024-07-24 17:21:31.912201] 
nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.895 [2024-07-24 17:21:31.912207] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.895 [2024-07-24 17:21:31.912212] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1567a60) 00:20:45.895 [2024-07-24 17:21:31.912222] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.895 [2024-07-24 17:21:31.912254] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15aa840, cid 0, qid 0 00:20:45.895 [2024-07-24 17:21:31.912332] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.895 [2024-07-24 17:21:31.912339] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.895 [2024-07-24 17:21:31.912343] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.895 [2024-07-24 17:21:31.912348] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15aa840) on tqpair=0x1567a60 00:20:45.895 [2024-07-24 17:21:31.912354] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:20:45.895 [2024-07-24 17:21:31.912367] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:20:45.895 [2024-07-24 17:21:31.912380] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.895 [2024-07-24 17:21:31.912388] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.895 [2024-07-24 17:21:31.912394] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1567a60) 00:20:45.895 [2024-07-24 17:21:31.912407] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.895 [2024-07-24 17:21:31.912439] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15aa840, cid 0, qid 0 00:20:45.895 [2024-07-24 17:21:31.912499] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.895 [2024-07-24 17:21:31.912511] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.895 [2024-07-24 17:21:31.912518] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.895 [2024-07-24 17:21:31.912525] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15aa840) on tqpair=0x1567a60 00:20:45.895 [2024-07-24 17:21:31.912535] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:20:45.895 [2024-07-24 17:21:31.912550] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:20:45.895 [2024-07-24 17:21:31.912559] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.895 [2024-07-24 17:21:31.912564] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.895 [2024-07-24 17:21:31.912568] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1567a60) 00:20:45.895 [2024-07-24 17:21:31.912577] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.895 [2024-07-24 17:21:31.912601] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15aa840, cid 0, qid 0 
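The connect sequence being traced here (FABRIC CONNECT, the VS/CAP/CC/CSTS property reads, CC.EN = 1, then IDENTIFY) is aimed at the discovery subsystem the test provisioned a few lines earlier with rpc_cmd. For reference, the same provisioning expressed as direct rpc.py calls, with every argument copied from those rpc_cmd lines; the RPC shell variable and the default /var/tmp/spdk.sock socket are assumptions:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_get_subsystems    # returns the Discovery + NVMe subsystem JSON shown above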
00:20:45.895 [2024-07-24 17:21:31.912658] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.895 [2024-07-24 17:21:31.912671] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.895 [2024-07-24 17:21:31.912679] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.895 [2024-07-24 17:21:31.912686] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15aa840) on tqpair=0x1567a60 00:20:45.895 [2024-07-24 17:21:31.912695] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:45.895 [2024-07-24 17:21:31.912713] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.895 [2024-07-24 17:21:31.912723] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.895 [2024-07-24 17:21:31.912730] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1567a60) 00:20:45.895 [2024-07-24 17:21:31.912740] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.895 [2024-07-24 17:21:31.912768] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15aa840, cid 0, qid 0 00:20:45.895 [2024-07-24 17:21:31.912822] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.895 [2024-07-24 17:21:31.912831] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.895 [2024-07-24 17:21:31.912835] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.895 [2024-07-24 17:21:31.912840] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15aa840) on tqpair=0x1567a60 00:20:45.895 [2024-07-24 17:21:31.912845] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:20:45.895 [2024-07-24 17:21:31.912851] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:20:45.895 [2024-07-24 17:21:31.912860] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:45.895 [2024-07-24 17:21:31.912967] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:20:45.895 [2024-07-24 17:21:31.912980] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:45.895 [2024-07-24 17:21:31.912991] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.895 [2024-07-24 17:21:31.912996] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.895 [2024-07-24 17:21:31.913000] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1567a60) 00:20:45.895 [2024-07-24 17:21:31.913009] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.895 [2024-07-24 17:21:31.913033] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15aa840, cid 0, qid 0 00:20:45.895 [2024-07-24 17:21:31.913092] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.895 [2024-07-24 17:21:31.913118] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:20:45.895 [2024-07-24 17:21:31.913123] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.895 [2024-07-24 17:21:31.913127] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15aa840) on tqpair=0x1567a60 00:20:45.895 [2024-07-24 17:21:31.913133] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:45.895 [2024-07-24 17:21:31.913146] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.895 [2024-07-24 17:21:31.913151] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.896 [2024-07-24 17:21:31.913155] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1567a60) 00:20:45.896 [2024-07-24 17:21:31.913163] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.896 [2024-07-24 17:21:31.913187] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15aa840, cid 0, qid 0 00:20:45.896 [2024-07-24 17:21:31.913247] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.896 [2024-07-24 17:21:31.913259] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.896 [2024-07-24 17:21:31.913263] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.896 [2024-07-24 17:21:31.913267] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15aa840) on tqpair=0x1567a60 00:20:45.896 [2024-07-24 17:21:31.913272] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:45.896 [2024-07-24 17:21:31.913278] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:20:45.896 [2024-07-24 17:21:31.913288] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:20:45.896 [2024-07-24 17:21:31.913300] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:20:45.896 [2024-07-24 17:21:31.913316] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.896 [2024-07-24 17:21:31.913325] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1567a60) 00:20:45.896 [2024-07-24 17:21:31.913338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.896 [2024-07-24 17:21:31.913368] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15aa840, cid 0, qid 0 00:20:45.896 [2024-07-24 17:21:31.913482] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:45.896 [2024-07-24 17:21:31.913497] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:45.896 [2024-07-24 17:21:31.913505] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:45.896 [2024-07-24 17:21:31.913509] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1567a60): datao=0, datal=4096, cccid=0 00:20:45.896 [2024-07-24 17:21:31.913515] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15aa840) on tqpair(0x1567a60): 
expected_datao=0, payload_size=4096 00:20:45.896 [2024-07-24 17:21:31.913520] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.896 [2024-07-24 17:21:31.913529] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:45.896 [2024-07-24 17:21:31.913534] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:45.896 [2024-07-24 17:21:31.913546] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.896 [2024-07-24 17:21:31.913557] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.896 [2024-07-24 17:21:31.913564] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.896 [2024-07-24 17:21:31.913571] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15aa840) on tqpair=0x1567a60 00:20:45.896 [2024-07-24 17:21:31.913584] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:20:45.896 [2024-07-24 17:21:31.913590] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:20:45.896 [2024-07-24 17:21:31.913595] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:20:45.896 [2024-07-24 17:21:31.913606] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:20:45.896 [2024-07-24 17:21:31.913612] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:20:45.896 [2024-07-24 17:21:31.913617] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:20:45.896 [2024-07-24 17:21:31.913631] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:20:45.896 [2024-07-24 17:21:31.913645] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.896 [2024-07-24 17:21:31.913653] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.896 [2024-07-24 17:21:31.913660] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1567a60) 00:20:45.896 [2024-07-24 17:21:31.913669] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:45.896 [2024-07-24 17:21:31.913695] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15aa840, cid 0, qid 0 00:20:45.896 [2024-07-24 17:21:31.913760] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.896 [2024-07-24 17:21:31.913771] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.896 [2024-07-24 17:21:31.913778] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.896 [2024-07-24 17:21:31.913784] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15aa840) on tqpair=0x1567a60 00:20:45.896 [2024-07-24 17:21:31.913797] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.896 [2024-07-24 17:21:31.913805] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.896 [2024-07-24 17:21:31.913811] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1567a60) 00:20:45.896 [2024-07-24 17:21:31.913822] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:45.896 [2024-07-24 17:21:31.913830] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.896 [2024-07-24 17:21:31.913834] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.896 [2024-07-24 17:21:31.913838] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1567a60) 00:20:45.896 [2024-07-24 17:21:31.913844] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:45.896 [2024-07-24 17:21:31.913851] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.896 [2024-07-24 17:21:31.913855] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.896 [2024-07-24 17:21:31.913859] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1567a60) 00:20:45.896 [2024-07-24 17:21:31.913865] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:45.896 [2024-07-24 17:21:31.913871] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.896 [2024-07-24 17:21:31.913875] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.896 [2024-07-24 17:21:31.913879] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1567a60) 00:20:45.896 [2024-07-24 17:21:31.913886] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:45.896 [2024-07-24 17:21:31.913894] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:20:45.896 [2024-07-24 17:21:31.913909] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:45.896 [2024-07-24 17:21:31.913921] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.896 [2024-07-24 17:21:31.913929] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1567a60) 00:20:45.896 [2024-07-24 17:21:31.913940] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.896 [2024-07-24 17:21:31.913973] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15aa840, cid 0, qid 0 00:20:45.896 [2024-07-24 17:21:31.913983] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15aa9c0, cid 1, qid 0 00:20:45.896 [2024-07-24 17:21:31.913990] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15aab40, cid 2, qid 0 00:20:45.896 [2024-07-24 17:21:31.913998] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15aacc0, cid 3, qid 0 00:20:45.896 [2024-07-24 17:21:31.914005] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15aae40, cid 4, qid 0 00:20:45.896 [2024-07-24 17:21:31.914095] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.896 [2024-07-24 17:21:31.914127] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.896 [2024-07-24 17:21:31.914135] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.896 [2024-07-24 17:21:31.914142] nvme_tcp.c:1069:nvme_tcp_req_complete: 
*DEBUG*: complete tcp_req(0x15aae40) on tqpair=0x1567a60 00:20:45.896 [2024-07-24 17:21:31.914151] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:20:45.896 [2024-07-24 17:21:31.914161] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:20:45.896 [2024-07-24 17:21:31.914180] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.896 [2024-07-24 17:21:31.914186] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1567a60) 00:20:45.896 [2024-07-24 17:21:31.914195] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.896 [2024-07-24 17:21:31.914220] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15aae40, cid 4, qid 0 00:20:45.896 [2024-07-24 17:21:31.914296] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:45.896 [2024-07-24 17:21:31.914307] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:45.896 [2024-07-24 17:21:31.914311] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:45.896 [2024-07-24 17:21:31.914315] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1567a60): datao=0, datal=4096, cccid=4 00:20:45.896 [2024-07-24 17:21:31.914320] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15aae40) on tqpair(0x1567a60): expected_datao=0, payload_size=4096 00:20:45.896 [2024-07-24 17:21:31.914325] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.896 [2024-07-24 17:21:31.914333] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:45.896 [2024-07-24 17:21:31.914337] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:45.896 [2024-07-24 17:21:31.914346] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.896 [2024-07-24 17:21:31.914352] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.896 [2024-07-24 17:21:31.914358] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.896 [2024-07-24 17:21:31.914365] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15aae40) on tqpair=0x1567a60 00:20:45.896 [2024-07-24 17:21:31.914385] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:20:45.896 [2024-07-24 17:21:31.914422] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.896 [2024-07-24 17:21:31.914430] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1567a60) 00:20:45.896 [2024-07-24 17:21:31.914438] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.896 [2024-07-24 17:21:31.914447] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.896 [2024-07-24 17:21:31.914451] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.896 [2024-07-24 17:21:31.914455] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1567a60) 00:20:45.897 [2024-07-24 17:21:31.914461] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 
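Further on in this trace the identify tool keeps the admin connection alive (the KEEP ALIVE commands) and pulls the discovery log page (the GET LOG PAGE (02) commands with log identifier 0x70) to build the report printed below. Not part of this run, but the same discovery subsystem could also be queried from the default namespace with the kernel initiator; nvme-cli is assumed to be installed, and the hostnqn value is the one generated by nvmf/common.sh earlier in this trace:

# Alternative check (not executed in this run) using nvme-cli from the host side:
nvme discover -t tcp -a 10.0.0.2 -s 4420 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b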
00:20:45.897 [2024-07-24 17:21:31.914491] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15aae40, cid 4, qid 0 00:20:45.897 [2024-07-24 17:21:31.914502] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15aafc0, cid 5, qid 0 00:20:45.897 [2024-07-24 17:21:31.914605] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:45.897 [2024-07-24 17:21:31.914633] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:45.897 [2024-07-24 17:21:31.914638] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:45.897 [2024-07-24 17:21:31.914642] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1567a60): datao=0, datal=1024, cccid=4 00:20:45.897 [2024-07-24 17:21:31.914648] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15aae40) on tqpair(0x1567a60): expected_datao=0, payload_size=1024 00:20:45.897 [2024-07-24 17:21:31.914653] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.897 [2024-07-24 17:21:31.914663] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:45.897 [2024-07-24 17:21:31.914671] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:45.897 [2024-07-24 17:21:31.914680] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.897 [2024-07-24 17:21:31.914689] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.897 [2024-07-24 17:21:31.914697] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.897 [2024-07-24 17:21:31.914704] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15aafc0) on tqpair=0x1567a60 00:20:45.897 [2024-07-24 17:21:31.955174] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.897 [2024-07-24 17:21:31.955197] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.897 [2024-07-24 17:21:31.955202] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.897 [2024-07-24 17:21:31.955207] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15aae40) on tqpair=0x1567a60 00:20:45.897 [2024-07-24 17:21:31.955223] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.897 [2024-07-24 17:21:31.955229] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1567a60) 00:20:45.897 [2024-07-24 17:21:31.955239] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.897 [2024-07-24 17:21:31.955275] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15aae40, cid 4, qid 0 00:20:45.897 [2024-07-24 17:21:31.955350] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:45.897 [2024-07-24 17:21:31.955358] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:45.897 [2024-07-24 17:21:31.955361] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:45.897 [2024-07-24 17:21:31.955365] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1567a60): datao=0, datal=3072, cccid=4 00:20:45.897 [2024-07-24 17:21:31.955370] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15aae40) on tqpair(0x1567a60): expected_datao=0, payload_size=3072 00:20:45.897 [2024-07-24 17:21:31.955375] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.897 [2024-07-24 17:21:31.955383] 
nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:45.897 [2024-07-24 17:21:31.955388] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:45.897 [2024-07-24 17:21:31.955398] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.897 [2024-07-24 17:21:31.955408] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.897 [2024-07-24 17:21:31.955414] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.897 [2024-07-24 17:21:31.955421] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15aae40) on tqpair=0x1567a60 00:20:45.897 [2024-07-24 17:21:31.955438] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.897 [2024-07-24 17:21:31.955448] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1567a60) 00:20:45.897 [2024-07-24 17:21:31.955460] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.897 [2024-07-24 17:21:31.955508] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15aae40, cid 4, qid 0 00:20:45.897 [2024-07-24 17:21:31.955576] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:45.897 [2024-07-24 17:21:31.955583] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:45.897 [2024-07-24 17:21:31.955587] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:45.897 [2024-07-24 17:21:31.955590] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1567a60): datao=0, datal=8, cccid=4 00:20:45.897 [2024-07-24 17:21:31.955595] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15aae40) on tqpair(0x1567a60): expected_datao=0, payload_size=8 00:20:45.897 [2024-07-24 17:21:31.955600] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.897 [2024-07-24 17:21:31.955607] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:45.897 [2024-07-24 17:21:31.955611] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:45.897 ===================================================== 00:20:45.897 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:20:45.897 ===================================================== 00:20:45.897 Controller Capabilities/Features 00:20:45.897 ================================ 00:20:45.897 Vendor ID: 0000 00:20:45.897 Subsystem Vendor ID: 0000 00:20:45.897 Serial Number: .................... 00:20:45.897 Model Number: ........................................ 
00:20:45.897 Firmware Version: 24.09 00:20:45.897 Recommended Arb Burst: 0 00:20:45.897 IEEE OUI Identifier: 00 00 00 00:20:45.897 Multi-path I/O 00:20:45.897 May have multiple subsystem ports: No 00:20:45.897 May have multiple controllers: No 00:20:45.897 Associated with SR-IOV VF: No 00:20:45.897 Max Data Transfer Size: 131072 00:20:45.897 Max Number of Namespaces: 0 00:20:45.897 Max Number of I/O Queues: 1024 00:20:45.897 NVMe Specification Version (VS): 1.3 00:20:45.897 NVMe Specification Version (Identify): 1.3 00:20:45.897 Maximum Queue Entries: 128 00:20:45.897 Contiguous Queues Required: Yes 00:20:45.897 Arbitration Mechanisms Supported 00:20:45.897 Weighted Round Robin: Not Supported 00:20:45.897 Vendor Specific: Not Supported 00:20:45.897 Reset Timeout: 15000 ms 00:20:45.897 Doorbell Stride: 4 bytes 00:20:45.897 NVM Subsystem Reset: Not Supported 00:20:45.897 Command Sets Supported 00:20:45.897 NVM Command Set: Supported 00:20:45.897 Boot Partition: Not Supported 00:20:45.897 Memory Page Size Minimum: 4096 bytes 00:20:45.897 Memory Page Size Maximum: 4096 bytes 00:20:45.897 Persistent Memory Region: Not Supported 00:20:45.897 Optional Asynchronous Events Supported 00:20:45.897 Namespace Attribute Notices: Not Supported 00:20:45.897 Firmware Activation Notices: Not Supported 00:20:45.897 ANA Change Notices: Not Supported 00:20:45.897 PLE Aggregate Log Change Notices: Not Supported 00:20:45.897 LBA Status Info Alert Notices: Not Supported 00:20:45.897 EGE Aggregate Log Change Notices: Not Supported 00:20:45.897 Normal NVM Subsystem Shutdown event: Not Supported 00:20:45.897 Zone Descriptor Change Notices: Not Supported 00:20:45.897 Discovery Log Change Notices: Supported 00:20:45.897 Controller Attributes 00:20:45.897 128-bit Host Identifier: Not Supported 00:20:45.897 Non-Operational Permissive Mode: Not Supported 00:20:45.897 NVM Sets: Not Supported 00:20:45.897 Read Recovery Levels: Not Supported 00:20:45.897 Endurance Groups: Not Supported 00:20:45.897 Predictable Latency Mode: Not Supported 00:20:45.897 Traffic Based Keep ALive: Not Supported 00:20:45.897 Namespace Granularity: Not Supported 00:20:45.897 SQ Associations: Not Supported 00:20:45.897 UUID List: Not Supported 00:20:45.897 Multi-Domain Subsystem: Not Supported 00:20:45.897 Fixed Capacity Management: Not Supported 00:20:45.897 Variable Capacity Management: Not Supported 00:20:45.897 Delete Endurance Group: Not Supported 00:20:45.897 Delete NVM Set: Not Supported 00:20:45.897 Extended LBA Formats Supported: Not Supported 00:20:45.897 Flexible Data Placement Supported: Not Supported 00:20:45.897 00:20:45.897 Controller Memory Buffer Support 00:20:45.897 ================================ 00:20:45.897 Supported: No 00:20:45.897 00:20:45.897 Persistent Memory Region Support 00:20:45.897 ================================ 00:20:45.897 Supported: No 00:20:45.897 00:20:45.897 Admin Command Set Attributes 00:20:45.897 ============================ 00:20:45.897 Security Send/Receive: Not Supported 00:20:45.897 Format NVM: Not Supported 00:20:45.897 Firmware Activate/Download: Not Supported 00:20:45.897 Namespace Management: Not Supported 00:20:45.897 Device Self-Test: Not Supported 00:20:45.897 Directives: Not Supported 00:20:45.897 NVMe-MI: Not Supported 00:20:45.897 Virtualization Management: Not Supported 00:20:45.897 Doorbell Buffer Config: Not Supported 00:20:45.897 Get LBA Status Capability: Not Supported 00:20:45.897 Command & Feature Lockdown Capability: Not Supported 00:20:45.897 Abort Command Limit: 1 00:20:45.897 Async 
Event Request Limit: 4 00:20:45.897 Number of Firmware Slots: N/A 00:20:45.897 Firmware Slot 1 Read-Only: N/A 00:20:45.897 Firmware Activation Without Reset: N/A 00:20:45.897 [2024-07-24 17:21:32.000203] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.897 [2024-07-24 17:21:32.000228] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.897 [2024-07-24 17:21:32.000234] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.897 [2024-07-24 17:21:32.000239] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15aae40) on tqpair=0x1567a60 00:20:45.897 Multiple Update Detection Support: N/A 00:20:45.897 Firmware Update Granularity: No Information Provided 00:20:45.897 Per-Namespace SMART Log: No 00:20:45.897 Asymmetric Namespace Access Log Page: Not Supported 00:20:45.897 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:20:45.897 Command Effects Log Page: Not Supported 00:20:45.897 Get Log Page Extended Data: Supported 00:20:45.898 Telemetry Log Pages: Not Supported 00:20:45.898 Persistent Event Log Pages: Not Supported 00:20:45.898 Supported Log Pages Log Page: May Support 00:20:45.898 Commands Supported & Effects Log Page: Not Supported 00:20:45.898 Feature Identifiers & Effects Log Page:May Support 00:20:45.898 NVMe-MI Commands & Effects Log Page: May Support 00:20:45.898 Data Area 4 for Telemetry Log: Not Supported 00:20:45.898 Error Log Page Entries Supported: 128 00:20:45.898 Keep Alive: Not Supported 00:20:45.898 00:20:45.898 NVM Command Set Attributes 00:20:45.898 ========================== 00:20:45.898 Submission Queue Entry Size 00:20:45.898 Max: 1 00:20:45.898 Min: 1 00:20:45.898 Completion Queue Entry Size 00:20:45.898 Max: 1 00:20:45.898 Min: 1 00:20:45.898 Number of Namespaces: 0 00:20:45.898 Compare Command: Not Supported 00:20:45.898 Write Uncorrectable Command: Not Supported 00:20:45.898 Dataset Management Command: Not Supported 00:20:45.898 Write Zeroes Command: Not Supported 00:20:45.898 Set Features Save Field: Not Supported 00:20:45.898 Reservations: Not Supported 00:20:45.898 Timestamp: Not Supported 00:20:45.898 Copy: Not Supported 00:20:45.898 Volatile Write Cache: Not Present 00:20:45.898 Atomic Write Unit (Normal): 1 00:20:45.898 Atomic Write Unit (PFail): 1 00:20:45.898 Atomic Compare & Write Unit: 1 00:20:45.898 Fused Compare & Write: Supported 00:20:45.898 Scatter-Gather List 00:20:45.898 SGL Command Set: Supported 00:20:45.898 SGL Keyed: Supported 00:20:45.898 SGL Bit Bucket Descriptor: Not Supported 00:20:45.898 SGL Metadata Pointer: Not Supported 00:20:45.898 Oversized SGL: Not Supported 00:20:45.898 SGL Metadata Address: Not Supported 00:20:45.898 SGL Offset: Supported 00:20:45.898 Transport SGL Data Block: Not Supported 00:20:45.898 Replay Protected Memory Block: Not Supported 00:20:45.898 00:20:45.898 Firmware Slot Information 00:20:45.898 ========================= 00:20:45.898 Active slot: 0 00:20:45.898 00:20:45.898 00:20:45.898 Error Log 00:20:45.898 ========= 00:20:45.898 00:20:45.898 Active Namespaces 00:20:45.898 ================= 00:20:45.898 Discovery Log Page 00:20:45.898 ================== 00:20:45.898 Generation Counter: 2 00:20:45.898 Number of Records: 2 00:20:45.898 Record Format: 0 00:20:45.898 00:20:45.898 Discovery Log Entry 0 00:20:45.898 ---------------------- 00:20:45.898 Transport Type: 3 (TCP) 00:20:45.898 Address Family: 1 (IPv4) 00:20:45.898 Subsystem Type: 3 (Current Discovery Subsystem) 00:20:45.898 Entry Flags: 00:20:45.898 Duplicate Returned
Information: 1 00:20:45.898 Explicit Persistent Connection Support for Discovery: 1 00:20:45.898 Transport Requirements: 00:20:45.898 Secure Channel: Not Required 00:20:45.898 Port ID: 0 (0x0000) 00:20:45.898 Controller ID: 65535 (0xffff) 00:20:45.898 Admin Max SQ Size: 128 00:20:45.898 Transport Service Identifier: 4420 00:20:45.898 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:20:45.898 Transport Address: 10.0.0.2 00:20:45.898 Discovery Log Entry 1 00:20:45.898 ---------------------- 00:20:45.898 Transport Type: 3 (TCP) 00:20:45.898 Address Family: 1 (IPv4) 00:20:45.898 Subsystem Type: 2 (NVM Subsystem) 00:20:45.898 Entry Flags: 00:20:45.898 Duplicate Returned Information: 0 00:20:45.898 Explicit Persistent Connection Support for Discovery: 0 00:20:45.898 Transport Requirements: 00:20:45.898 Secure Channel: Not Required 00:20:45.898 Port ID: 0 (0x0000) 00:20:45.898 Controller ID: 65535 (0xffff) 00:20:45.898 Admin Max SQ Size: 128 00:20:45.898 Transport Service Identifier: 4420 00:20:45.898 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:20:45.898 Transport Address: 10.0.0.2 [2024-07-24 17:21:32.000355] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:20:45.898 [2024-07-24 17:21:32.000372] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15aa840) on tqpair=0x1567a60 00:20:45.898 [2024-07-24 17:21:32.000380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.898 [2024-07-24 17:21:32.000386] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15aa9c0) on tqpair=0x1567a60 00:20:45.898 [2024-07-24 17:21:32.000391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.898 [2024-07-24 17:21:32.000397] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15aab40) on tqpair=0x1567a60 00:20:45.898 [2024-07-24 17:21:32.000401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.898 [2024-07-24 17:21:32.000407] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15aacc0) on tqpair=0x1567a60 00:20:45.898 [2024-07-24 17:21:32.000411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:45.898 [2024-07-24 17:21:32.000422] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.898 [2024-07-24 17:21:32.000427] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.898 [2024-07-24 17:21:32.000431] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1567a60) 00:20:45.898 [2024-07-24 17:21:32.000441] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.898 [2024-07-24 17:21:32.000469] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15aacc0, cid 3, qid 0 00:20:45.898 [2024-07-24 17:21:32.000531] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.898 [2024-07-24 17:21:32.000545] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.898 [2024-07-24 17:21:32.000552] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.898 [2024-07-24 17:21:32.000559] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15aacc0) on tqpair=0x1567a60 00:20:45.898 [2024-07-24 17:21:32.000580] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.898 [2024-07-24 17:21:32.000590] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.898 [2024-07-24 17:21:32.000596] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1567a60) 00:20:45.898 [2024-07-24 17:21:32.000607] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.898 [2024-07-24 17:21:32.000639] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15aacc0, cid 3, qid 0 00:20:45.898 [2024-07-24 17:21:32.000717] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.898 [2024-07-24 17:21:32.000732] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.898 [2024-07-24 17:21:32.000739] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.898 [2024-07-24 17:21:32.000746] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15aacc0) on tqpair=0x1567a60 00:20:45.898 [2024-07-24 17:21:32.000752] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:20:45.898 [2024-07-24 17:21:32.000757] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:20:45.898 [2024-07-24 17:21:32.000771] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.898 [2024-07-24 17:21:32.000776] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.898 [2024-07-24 17:21:32.000780] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1567a60) 00:20:45.898 [2024-07-24 17:21:32.000789] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.898 [2024-07-24 17:21:32.000815] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15aacc0, cid 3, qid 0 00:20:45.898 [2024-07-24 17:21:32.000870] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.898 [2024-07-24 17:21:32.000884] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.898 [2024-07-24 17:21:32.000888] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.898 [2024-07-24 17:21:32.000893] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15aacc0) on tqpair=0x1567a60 00:20:45.898 [2024-07-24 17:21:32.000906] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.898 [2024-07-24 17:21:32.000914] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.898 [2024-07-24 17:21:32.000920] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1567a60) 00:20:45.898 [2024-07-24 17:21:32.000932] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.898 [2024-07-24 17:21:32.000962] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15aacc0, cid 3, qid 0 00:20:45.898 [2024-07-24 17:21:32.001014] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.898 [2024-07-24 17:21:32.001023] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.898 [2024-07-24 
17:21:32.001029] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.898 [2024-07-24 17:21:32.001036] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15aacc0) on tqpair=0x1567a60 00:20:45.898 [2024-07-24 17:21:32.001054] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.898 [2024-07-24 17:21:32.001064] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.898 [2024-07-24 17:21:32.001071] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1567a60) 00:20:45.898 [2024-07-24 17:21:32.001079] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.898 [2024-07-24 17:21:32.001135] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15aacc0, cid 3, qid 0 00:20:45.898 [2024-07-24 17:21:32.001187] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.898 [2024-07-24 17:21:32.001196] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.899 [2024-07-24 17:21:32.001200] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.899 [2024-07-24 17:21:32.001204] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15aacc0) on tqpair=0x1567a60 00:20:45.899 [2024-07-24 17:21:32.001217] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.899 [2024-07-24 17:21:32.001222] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.899 [2024-07-24 17:21:32.001228] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1567a60) 00:20:45.899 [2024-07-24 17:21:32.001239] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.899 [2024-07-24 17:21:32.001275] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15aacc0, cid 3, qid 0 00:20:45.899 [2024-07-24 17:21:32.001327] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.899 [2024-07-24 17:21:32.001338] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.899 [2024-07-24 17:21:32.001344] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.899 [2024-07-24 17:21:32.001351] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15aacc0) on tqpair=0x1567a60 00:20:45.899 [2024-07-24 17:21:32.001370] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.899 [2024-07-24 17:21:32.001379] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.899 [2024-07-24 17:21:32.001386] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1567a60) 00:20:45.899 [2024-07-24 17:21:32.001399] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.899 [2024-07-24 17:21:32.001428] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15aacc0, cid 3, qid 0 00:20:45.899 [2024-07-24 17:21:32.001482] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.899 [2024-07-24 17:21:32.001495] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.899 [2024-07-24 17:21:32.001500] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.899 [2024-07-24 17:21:32.001504] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15aacc0) on 
tqpair=0x1567a60 00:20:45.899 [2024-07-24 17:21:32.001517] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.899 [2024-07-24 17:21:32.001537] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.899 [2024-07-24 17:21:32.001541] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1567a60) 00:20:45.899 [2024-07-24 17:21:32.001549] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.899 [2024-07-24 17:21:32.001572] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15aacc0, cid 3, qid 0 00:20:45.899 [2024-07-24 17:21:32.001623] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.899 [2024-07-24 17:21:32.001633] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.899 [2024-07-24 17:21:32.001636] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.899 [2024-07-24 17:21:32.001641] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15aacc0) on tqpair=0x1567a60 00:20:45.899 [2024-07-24 17:21:32.001652] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.899 [2024-07-24 17:21:32.001657] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.899 [2024-07-24 17:21:32.001661] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1567a60) 00:20:45.899 [2024-07-24 17:21:32.001671] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.899 [2024-07-24 17:21:32.001702] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15aacc0, cid 3, qid 0 00:20:45.899 [2024-07-24 17:21:32.001755] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.899 [2024-07-24 17:21:32.001763] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.899 [2024-07-24 17:21:32.001766] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.899 [2024-07-24 17:21:32.001771] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15aacc0) on tqpair=0x1567a60 00:20:45.899 [2024-07-24 17:21:32.001782] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.899 [2024-07-24 17:21:32.001790] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.899 [2024-07-24 17:21:32.001796] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1567a60) 00:20:45.899 [2024-07-24 17:21:32.001807] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.899 [2024-07-24 17:21:32.001839] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15aacc0, cid 3, qid 0 00:20:45.899 [2024-07-24 17:21:32.001891] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.899 [2024-07-24 17:21:32.001903] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.899 [2024-07-24 17:21:32.001909] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.899 [2024-07-24 17:21:32.001916] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15aacc0) on tqpair=0x1567a60 00:20:45.899 [2024-07-24 17:21:32.001933] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.899 [2024-07-24 17:21:32.001956] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.899 [2024-07-24 17:21:32.001960] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1567a60) 00:20:45.899 [2024-07-24 17:21:32.001968] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.899 [2024-07-24 17:21:32.001991] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15aacc0, cid 3, qid 0 00:20:45.899 [2024-07-24 17:21:32.002049] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.899 [2024-07-24 17:21:32.002058] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.899 [2024-07-24 17:21:32.002062] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.899 [2024-07-24 17:21:32.002067] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15aacc0) on tqpair=0x1567a60 00:20:45.899 [2024-07-24 17:21:32.002079] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.899 [2024-07-24 17:21:32.002084] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.899 [2024-07-24 17:21:32.002088] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1567a60) 00:20:45.899 [2024-07-24 17:21:32.002096] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.899 [2024-07-24 17:21:32.002118] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15aacc0, cid 3, qid 0 00:20:45.899 [2024-07-24 17:21:32.002184] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.899 [2024-07-24 17:21:32.002196] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.899 [2024-07-24 17:21:32.002199] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.899 [2024-07-24 17:21:32.002204] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15aacc0) on tqpair=0x1567a60 00:20:45.899 [2024-07-24 17:21:32.002216] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.899 [2024-07-24 17:21:32.002221] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.899 [2024-07-24 17:21:32.002225] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1567a60) 00:20:45.899 [2024-07-24 17:21:32.002234] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.899 [2024-07-24 17:21:32.002257] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15aacc0, cid 3, qid 0 00:20:45.899 [2024-07-24 17:21:32.002309] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.899 [2024-07-24 17:21:32.002319] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.899 [2024-07-24 17:21:32.002323] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.899 [2024-07-24 17:21:32.002327] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15aacc0) on tqpair=0x1567a60 00:20:45.899 [2024-07-24 17:21:32.002339] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.899 [2024-07-24 17:21:32.002344] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.899 [2024-07-24 17:21:32.002348] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1567a60) 00:20:45.899 
[2024-07-24 17:21:32.002356] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.899 [2024-07-24 17:21:32.002379] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15aacc0, cid 3, qid 0 00:20:45.899 [2024-07-24 17:21:32.002434] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.899 [2024-07-24 17:21:32.002447] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.899 [2024-07-24 17:21:32.002452] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.899 [2024-07-24 17:21:32.002456] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15aacc0) on tqpair=0x1567a60 00:20:45.899 [2024-07-24 17:21:32.002469] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.899 [2024-07-24 17:21:32.002474] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.900 [2024-07-24 17:21:32.002479] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1567a60) 00:20:45.900 [2024-07-24 17:21:32.002491] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.900 [2024-07-24 17:21:32.002521] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15aacc0, cid 3, qid 0 00:20:45.900 [2024-07-24 17:21:32.002580] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.900 [2024-07-24 17:21:32.002593] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.900 [2024-07-24 17:21:32.002597] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.900 [2024-07-24 17:21:32.002602] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15aacc0) on tqpair=0x1567a60 00:20:45.900 [2024-07-24 17:21:32.002615] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.900 [2024-07-24 17:21:32.002621] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.900 [2024-07-24 17:21:32.002627] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1567a60) 00:20:45.900 [2024-07-24 17:21:32.002638] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.900 [2024-07-24 17:21:32.002668] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15aacc0, cid 3, qid 0 00:20:45.900 [2024-07-24 17:21:32.002720] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.900 [2024-07-24 17:21:32.002731] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.900 [2024-07-24 17:21:32.002734] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.900 [2024-07-24 17:21:32.002739] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15aacc0) on tqpair=0x1567a60 00:20:45.900 [2024-07-24 17:21:32.002751] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.900 [2024-07-24 17:21:32.002759] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.900 [2024-07-24 17:21:32.002765] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1567a60) 00:20:45.900 [2024-07-24 17:21:32.002777] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.900 [2024-07-24 17:21:32.002807] 
nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15aacc0, cid 3, qid 0 00:20:45.900 [2024-07-24 17:21:32.002863] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.900 [2024-07-24 17:21:32.002873] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.900 [2024-07-24 17:21:32.002889] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.900 [2024-07-24 17:21:32.002894] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15aacc0) on tqpair=0x1567a60 00:20:45.900 [2024-07-24 17:21:32.002907] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.900 [2024-07-24 17:21:32.002912] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.900 [2024-07-24 17:21:32.002916] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1567a60) 00:20:45.900 [2024-07-24 17:21:32.002924] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.900 [2024-07-24 17:21:32.002948] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15aacc0, cid 3, qid 0 00:20:45.900 [2024-07-24 17:21:32.003007] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.900 [2024-07-24 17:21:32.003020] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.900 [2024-07-24 17:21:32.003024] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.900 [2024-07-24 17:21:32.003029] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15aacc0) on tqpair=0x1567a60 00:20:45.900 [2024-07-24 17:21:32.003041] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.900 [2024-07-24 17:21:32.003046] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.900 [2024-07-24 17:21:32.003050] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1567a60) 00:20:45.900 [2024-07-24 17:21:32.003059] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.900 [2024-07-24 17:21:32.003082] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15aacc0, cid 3, qid 0 00:20:45.900 [2024-07-24 17:21:32.003151] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.900 [2024-07-24 17:21:32.003163] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.900 [2024-07-24 17:21:32.003166] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.900 [2024-07-24 17:21:32.003171] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15aacc0) on tqpair=0x1567a60 00:20:45.900 [2024-07-24 17:21:32.003183] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.900 [2024-07-24 17:21:32.003188] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.900 [2024-07-24 17:21:32.003193] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1567a60) 00:20:45.900 [2024-07-24 17:21:32.003211] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.900 [2024-07-24 17:21:32.003235] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15aacc0, cid 3, qid 0 00:20:45.900 [2024-07-24 17:21:32.003288] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.900 
[2024-07-24 17:21:32.003297] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.900 [2024-07-24 17:21:32.003301] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.900 [2024-07-24 17:21:32.003306] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15aacc0) on tqpair=0x1567a60 00:20:45.900 [2024-07-24 17:21:32.003317] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.900 [2024-07-24 17:21:32.003322] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.900 [2024-07-24 17:21:32.003328] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1567a60) 00:20:45.900 [2024-07-24 17:21:32.003339] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.900 [2024-07-24 17:21:32.003373] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15aacc0, cid 3, qid 0 00:20:45.900 [2024-07-24 17:21:32.003425] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.900 [2024-07-24 17:21:32.003433] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.900 [2024-07-24 17:21:32.003439] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.900 [2024-07-24 17:21:32.003446] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15aacc0) on tqpair=0x1567a60 00:20:45.900 [2024-07-24 17:21:32.003464] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.900 [2024-07-24 17:21:32.003474] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.900 [2024-07-24 17:21:32.003480] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1567a60) 00:20:45.900 [2024-07-24 17:21:32.003492] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.900 [2024-07-24 17:21:32.003522] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15aacc0, cid 3, qid 0 00:20:45.900 [2024-07-24 17:21:32.003576] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.900 [2024-07-24 17:21:32.003586] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.900 [2024-07-24 17:21:32.003590] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.900 [2024-07-24 17:21:32.003594] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15aacc0) on tqpair=0x1567a60 00:20:45.900 [2024-07-24 17:21:32.003607] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.900 [2024-07-24 17:21:32.003612] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.900 [2024-07-24 17:21:32.003615] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1567a60) 00:20:45.900 [2024-07-24 17:21:32.003624] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.900 [2024-07-24 17:21:32.003646] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15aacc0, cid 3, qid 0 00:20:45.900 [2024-07-24 17:21:32.003705] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.900 [2024-07-24 17:21:32.003717] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.900 [2024-07-24 17:21:32.003721] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:20:45.900 [2024-07-24 17:21:32.003725] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15aacc0) on tqpair=0x1567a60 00:20:45.900 [2024-07-24 17:21:32.003737] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.900 [2024-07-24 17:21:32.003742] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.900 [2024-07-24 17:21:32.003746] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1567a60) 00:20:45.900 [2024-07-24 17:21:32.003754] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.900 [2024-07-24 17:21:32.003777] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15aacc0, cid 3, qid 0 00:20:45.900 [2024-07-24 17:21:32.003837] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.900 [2024-07-24 17:21:32.003851] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.900 [2024-07-24 17:21:32.003858] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.900 [2024-07-24 17:21:32.003863] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15aacc0) on tqpair=0x1567a60 00:20:45.900 [2024-07-24 17:21:32.003875] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.900 [2024-07-24 17:21:32.003880] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.900 [2024-07-24 17:21:32.003884] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1567a60) 00:20:45.900 [2024-07-24 17:21:32.003892] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.900 [2024-07-24 17:21:32.003916] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15aacc0, cid 3, qid 0 00:20:45.900 [2024-07-24 17:21:32.003972] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.900 [2024-07-24 17:21:32.003986] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.900 [2024-07-24 17:21:32.003993] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.900 [2024-07-24 17:21:32.004001] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15aacc0) on tqpair=0x1567a60 00:20:45.900 [2024-07-24 17:21:32.004014] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.900 [2024-07-24 17:21:32.004019] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.900 [2024-07-24 17:21:32.004023] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1567a60) 00:20:45.900 [2024-07-24 17:21:32.004031] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.900 [2024-07-24 17:21:32.004055] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15aacc0, cid 3, qid 0 00:20:45.900 [2024-07-24 17:21:32.008130] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.900 [2024-07-24 17:21:32.008152] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.900 [2024-07-24 17:21:32.008173] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.900 [2024-07-24 17:21:32.008178] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15aacc0) on tqpair=0x1567a60 00:20:45.900 [2024-07-24 17:21:32.008193] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:45.901 [2024-07-24 17:21:32.008199] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:45.901 [2024-07-24 17:21:32.008203] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1567a60) 00:20:45.901 [2024-07-24 17:21:32.008213] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:45.901 [2024-07-24 17:21:32.008242] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15aacc0, cid 3, qid 0 00:20:45.901 [2024-07-24 17:21:32.008302] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:45.901 [2024-07-24 17:21:32.008310] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:45.901 [2024-07-24 17:21:32.008314] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:45.901 [2024-07-24 17:21:32.008318] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15aacc0) on tqpair=0x1567a60 00:20:45.901 [2024-07-24 17:21:32.008327] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:20:45.901 00:20:45.901 17:21:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:20:45.901 [2024-07-24 17:21:32.050646] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:20:45.901 [2024-07-24 17:21:32.050705] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86208 ] 00:20:46.183 [2024-07-24 17:21:32.186834] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:20:46.183 [2024-07-24 17:21:32.186952] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:46.183 [2024-07-24 17:21:32.186960] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:46.183 [2024-07-24 17:21:32.186973] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:46.183 [2024-07-24 17:21:32.186982] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:46.183 [2024-07-24 17:21:32.187118] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:20:46.183 [2024-07-24 17:21:32.187215] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1acda60 0 00:20:46.183 [2024-07-24 17:21:32.200143] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:46.183 [2024-07-24 17:21:32.200187] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:46.183 [2024-07-24 17:21:32.200209] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:46.183 [2024-07-24 17:21:32.200213] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:46.183 [2024-07-24 17:21:32.200262] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:46.183 [2024-07-24 17:21:32.200271] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
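The host/identify.sh line 45 entry above launches the prebuilt spdk_nvme_identify binary against nqn.2016-06.io.spdk:cnode1 with a TCP transport ID string, and -L all enables every debug log flag, which is why the nvme_tcp/nvme_ctrlr DEBUG tracing continues below. As a rough, non-authoritative sketch of what that transport ID resolves to at the SPDK public-API level (the program structure and application name are invented for the example; this is not the identify tool's source), a host could connect to the same subsystem like this:

#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int
main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {};
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch";   /* hypothetical application name */
	if (spdk_env_init(&env_opts) != 0) {
		return 1;
	}

	/* The same transport ID string that is passed to spdk_nvme_identify -r above. */
	if (spdk_nvme_transport_id_parse(&trid,
			"trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
			"subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	/* Synchronous connect: this drives the FABRIC CONNECT, read vs,
	 * read cap and CC.EN/CSTS.RDY handshake traced in the DEBUG lines. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("Subsystem NQN: %s\n", (const char *)cdata->subnqn);

	spdk_nvme_detach(ctrlr);
	return 0;
}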
00:20:46.183 [2024-07-24 17:21:32.200276] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1acda60) 00:20:46.183 [2024-07-24 17:21:32.200291] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:46.183 [2024-07-24 17:21:32.200325] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b10840, cid 0, qid 0 00:20:46.183 [2024-07-24 17:21:32.208122] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:46.183 [2024-07-24 17:21:32.208145] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:46.183 [2024-07-24 17:21:32.208150] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:46.183 [2024-07-24 17:21:32.208156] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b10840) on tqpair=0x1acda60 00:20:46.183 [2024-07-24 17:21:32.208169] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:46.183 [2024-07-24 17:21:32.208177] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:20:46.183 [2024-07-24 17:21:32.208184] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:20:46.183 [2024-07-24 17:21:32.208203] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:46.183 [2024-07-24 17:21:32.208209] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:46.183 [2024-07-24 17:21:32.208214] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1acda60) 00:20:46.183 [2024-07-24 17:21:32.208224] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.183 [2024-07-24 17:21:32.208255] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b10840, cid 0, qid 0 00:20:46.183 [2024-07-24 17:21:32.208331] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:46.183 [2024-07-24 17:21:32.208339] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:46.183 [2024-07-24 17:21:32.208343] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:46.183 [2024-07-24 17:21:32.208347] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b10840) on tqpair=0x1acda60 00:20:46.183 [2024-07-24 17:21:32.208353] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:20:46.183 [2024-07-24 17:21:32.208361] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:20:46.183 [2024-07-24 17:21:32.208370] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:46.183 [2024-07-24 17:21:32.208377] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:46.184 [2024-07-24 17:21:32.208384] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1acda60) 00:20:46.184 [2024-07-24 17:21:32.208396] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.184 [2024-07-24 17:21:32.208431] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b10840, cid 0, qid 0 00:20:46.184 [2024-07-24 17:21:32.208487] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
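After the FABRIC CONNECT above completes (CNTLID 0x0001), the init state machine walks through "read vs", "read cap" and "check en"; on a fabrics controller those states are FABRIC PROPERTY GET commands for the Version, Capabilities and Configuration/Status registers. Once a controller is attached, the cached register values can be read back through the public accessors; a small hedged sketch (the function name is invented, and a connected ctrlr handle such as the one from the previous sketch is assumed):

#include <stdio.h>
#include "spdk/nvme.h"

/* Print the registers that the "read vs" / "read cap" states above fetch
 * over the fabric with FABRIC PROPERTY GET. */
static void
print_ctrlr_regs(struct spdk_nvme_ctrlr *ctrlr)
{
	union spdk_nvme_vs_register vs = spdk_nvme_ctrlr_get_regs_vs(ctrlr);
	union spdk_nvme_cap_register cap = spdk_nvme_ctrlr_get_regs_cap(ctrlr);
	union spdk_nvme_csts_register csts = spdk_nvme_ctrlr_get_regs_csts(ctrlr);

	printf("NVMe spec version: %u.%u\n", (unsigned)vs.bits.mjr, (unsigned)vs.bits.mnr);
	printf("Max queue entries: %u\n", (unsigned)cap.bits.mqes + 1);  /* MQES is zero-based */
	printf("CSTS.RDY: %u\n", (unsigned)csts.bits.rdy);
}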
00:20:46.184 [2024-07-24 17:21:32.208496] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:46.184 [2024-07-24 17:21:32.208503] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:46.184 [2024-07-24 17:21:32.208510] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b10840) on tqpair=0x1acda60 00:20:46.184 [2024-07-24 17:21:32.208519] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:20:46.184 [2024-07-24 17:21:32.208533] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:20:46.184 [2024-07-24 17:21:32.208542] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:46.184 [2024-07-24 17:21:32.208547] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:46.184 [2024-07-24 17:21:32.208551] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1acda60) 00:20:46.184 [2024-07-24 17:21:32.208559] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.184 [2024-07-24 17:21:32.208582] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b10840, cid 0, qid 0 00:20:46.184 [2024-07-24 17:21:32.208642] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:46.184 [2024-07-24 17:21:32.208655] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:46.184 [2024-07-24 17:21:32.208659] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:46.184 [2024-07-24 17:21:32.208663] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b10840) on tqpair=0x1acda60 00:20:46.184 [2024-07-24 17:21:32.208669] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:46.184 [2024-07-24 17:21:32.208683] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:46.184 [2024-07-24 17:21:32.208692] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:46.184 [2024-07-24 17:21:32.208699] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1acda60) 00:20:46.184 [2024-07-24 17:21:32.208710] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.184 [2024-07-24 17:21:32.208740] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b10840, cid 0, qid 0 00:20:46.184 [2024-07-24 17:21:32.208794] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:46.184 [2024-07-24 17:21:32.208807] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:46.184 [2024-07-24 17:21:32.208814] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:46.184 [2024-07-24 17:21:32.208820] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b10840) on tqpair=0x1acda60 00:20:46.184 [2024-07-24 17:21:32.208826] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:20:46.184 [2024-07-24 17:21:32.208832] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:20:46.184 [2024-07-24 17:21:32.208841] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:46.184 [2024-07-24 17:21:32.208950] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:20:46.184 [2024-07-24 17:21:32.208957] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:46.184 [2024-07-24 17:21:32.208969] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:46.184 [2024-07-24 17:21:32.208977] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:46.184 [2024-07-24 17:21:32.208983] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1acda60) 00:20:46.184 [2024-07-24 17:21:32.208995] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.184 [2024-07-24 17:21:32.209026] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b10840, cid 0, qid 0 00:20:46.184 [2024-07-24 17:21:32.209083] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:46.184 [2024-07-24 17:21:32.209093] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:46.184 [2024-07-24 17:21:32.209120] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:46.184 [2024-07-24 17:21:32.209126] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b10840) on tqpair=0x1acda60 00:20:46.184 [2024-07-24 17:21:32.209133] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:46.184 [2024-07-24 17:21:32.209146] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:46.184 [2024-07-24 17:21:32.209151] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:46.184 [2024-07-24 17:21:32.209155] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1acda60) 00:20:46.184 [2024-07-24 17:21:32.209165] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.184 [2024-07-24 17:21:32.209189] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b10840, cid 0, qid 0 00:20:46.184 [2024-07-24 17:21:32.209258] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:46.184 [2024-07-24 17:21:32.209270] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:46.184 [2024-07-24 17:21:32.209276] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:46.184 [2024-07-24 17:21:32.209284] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b10840) on tqpair=0x1acda60 00:20:46.184 [2024-07-24 17:21:32.209292] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:46.184 [2024-07-24 17:21:32.209301] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:20:46.184 [2024-07-24 17:21:32.209314] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:20:46.184 [2024-07-24 17:21:32.209327] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify 
controller (timeout 30000 ms) 00:20:46.184 [2024-07-24 17:21:32.209338] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:46.184 [2024-07-24 17:21:32.209343] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1acda60) 00:20:46.184 [2024-07-24 17:21:32.209352] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.184 [2024-07-24 17:21:32.209377] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b10840, cid 0, qid 0 00:20:46.184 [2024-07-24 17:21:32.209473] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:46.184 [2024-07-24 17:21:32.209497] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:46.184 [2024-07-24 17:21:32.209504] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:46.184 [2024-07-24 17:21:32.209511] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1acda60): datao=0, datal=4096, cccid=0 00:20:46.185 [2024-07-24 17:21:32.209518] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b10840) on tqpair(0x1acda60): expected_datao=0, payload_size=4096 00:20:46.185 [2024-07-24 17:21:32.209527] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:46.185 [2024-07-24 17:21:32.209541] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:46.185 [2024-07-24 17:21:32.209550] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:46.185 [2024-07-24 17:21:32.209565] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:46.185 [2024-07-24 17:21:32.209572] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:46.185 [2024-07-24 17:21:32.209576] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:46.185 [2024-07-24 17:21:32.209580] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b10840) on tqpair=0x1acda60 00:20:46.185 [2024-07-24 17:21:32.209590] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:20:46.185 [2024-07-24 17:21:32.209596] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:20:46.185 [2024-07-24 17:21:32.209601] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:20:46.185 [2024-07-24 17:21:32.209611] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:20:46.185 [2024-07-24 17:21:32.209617] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:20:46.185 [2024-07-24 17:21:32.209622] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:20:46.185 [2024-07-24 17:21:32.209632] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:20:46.185 [2024-07-24 17:21:32.209641] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:46.185 [2024-07-24 17:21:32.209646] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:46.185 [2024-07-24 17:21:32.209650] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1acda60) 00:20:46.185 [2024-07-24 17:21:32.209658] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:46.185 [2024-07-24 17:21:32.209683] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b10840, cid 0, qid 0 00:20:46.185 [2024-07-24 17:21:32.209750] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:46.185 [2024-07-24 17:21:32.209760] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:46.185 [2024-07-24 17:21:32.209764] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:46.185 [2024-07-24 17:21:32.209768] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b10840) on tqpair=0x1acda60 00:20:46.185 [2024-07-24 17:21:32.209777] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:46.185 [2024-07-24 17:21:32.209781] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:46.185 [2024-07-24 17:21:32.209785] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1acda60) 00:20:46.185 [2024-07-24 17:21:32.209793] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:46.185 [2024-07-24 17:21:32.209804] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:46.185 [2024-07-24 17:21:32.209810] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:46.185 [2024-07-24 17:21:32.209816] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1acda60) 00:20:46.185 [2024-07-24 17:21:32.209826] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:46.185 [2024-07-24 17:21:32.209837] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:46.185 [2024-07-24 17:21:32.209844] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:46.185 [2024-07-24 17:21:32.209851] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1acda60) 00:20:46.185 [2024-07-24 17:21:32.209861] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:46.185 [2024-07-24 17:21:32.209871] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:46.185 [2024-07-24 17:21:32.209877] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:46.185 [2024-07-24 17:21:32.209881] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1acda60) 00:20:46.185 [2024-07-24 17:21:32.209887] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:46.185 [2024-07-24 17:21:32.209893] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:20:46.185 [2024-07-24 17:21:32.209904] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:46.185 [2024-07-24 17:21:32.209912] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:46.185 [2024-07-24 17:21:32.209916] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1acda60) 00:20:46.185 [2024-07-24 17:21:32.209923] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET 
FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.185 [2024-07-24 17:21:32.209965] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b10840, cid 0, qid 0 00:20:46.185 [2024-07-24 17:21:32.209977] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b109c0, cid 1, qid 0 00:20:46.185 [2024-07-24 17:21:32.209983] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b10b40, cid 2, qid 0 00:20:46.185 [2024-07-24 17:21:32.209991] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b10cc0, cid 3, qid 0 00:20:46.185 [2024-07-24 17:21:32.209998] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b10e40, cid 4, qid 0 00:20:46.185 [2024-07-24 17:21:32.210081] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:46.185 [2024-07-24 17:21:32.210091] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:46.185 [2024-07-24 17:21:32.210095] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:46.185 [2024-07-24 17:21:32.210116] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b10e40) on tqpair=0x1acda60 00:20:46.185 [2024-07-24 17:21:32.210126] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:20:46.185 [2024-07-24 17:21:32.210136] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:20:46.185 [2024-07-24 17:21:32.210151] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:20:46.185 [2024-07-24 17:21:32.210160] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:20:46.185 [2024-07-24 17:21:32.210168] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:46.185 [2024-07-24 17:21:32.210173] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:46.185 [2024-07-24 17:21:32.210177] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1acda60) 00:20:46.185 [2024-07-24 17:21:32.210185] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:46.185 [2024-07-24 17:21:32.210209] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b10e40, cid 4, qid 0 00:20:46.185 [2024-07-24 17:21:32.210275] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:46.185 [2024-07-24 17:21:32.210288] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:46.185 [2024-07-24 17:21:32.210292] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:46.185 [2024-07-24 17:21:32.210296] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b10e40) on tqpair=0x1acda60 00:20:46.186 [2024-07-24 17:21:32.210368] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:20:46.186 [2024-07-24 17:21:32.210387] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:20:46.186 [2024-07-24 17:21:32.210403] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:46.186 
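The debug trace up to this point is the host-side initialization state machine the SPDK NVMe host library runs when it connects to the TCP controller at 10.0.0.2:4420 (nqn.2016-06.io.spdk:cnode1): write CC.EN, wait for CSTS.RDY, IDENTIFY the controller, configure and post the async event requests, then the keep-alive exchange (ending with a keep alive sent every 5000000 us) and the SET FEATURES NUMBER OF QUEUES step. The sketch below is a minimal, illustrative reconstruction of the public-API calls behind that sequence, not the test's own code; the application name and the tuning values are assumptions, while the transport string mirrors the values in this trace.

#include <stdio.h>
#include <string.h>

#include "spdk/env.h"
#include "spdk/nvme.h"

int
main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid;
	struct spdk_nvme_ctrlr_opts ctrlr_opts;
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch";	/* illustrative application name */
	if (spdk_env_init(&env_opts) < 0) {
		return 1;
	}

	/* Target address and subsystem NQN as they appear in the trace. */
	memset(&trid, 0, sizeof(trid));
	if (spdk_nvme_transport_id_parse(&trid,
			"trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
			"subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	/* Start from the library defaults, then set the two knobs behind the
	 * "set keep alive timeout" and "set number of queues" steps above. */
	spdk_nvme_ctrlr_get_default_ctrlr_opts(&ctrlr_opts, sizeof(ctrlr_opts));
	ctrlr_opts.keep_alive_timeout_ms = 10000;	/* the library derives the keep-alive send interval from this */
	ctrlr_opts.num_io_queues = 4;			/* illustrative; the target advertises up to 127 */

	/* Runs the initialization state machine traced above (CC.EN, CSTS.RDY, IDENTIFY, AER, ...). */
	ctrlr = spdk_nvme_connect(&trid, &ctrlr_opts, sizeof(ctrlr_opts));
	if (ctrlr == NULL) {
		fprintf(stderr, "failed to connect to nqn.2016-06.io.spdk:cnode1\n");
		return 1;
	}

	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("SN: %.20s  Model: %.40s  FW: %.8s\n", cdata->sn, cdata->mn, cdata->fr);

	spdk_nvme_detach(ctrlr);
	return 0;
}

Building such a sketch requires the SPDK headers and libraries from the tree checked out earlier in this job.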
[2024-07-24 17:21:32.210411] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1acda60) 00:20:46.186 [2024-07-24 17:21:32.210421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.186 [2024-07-24 17:21:32.210446] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b10e40, cid 4, qid 0 00:20:46.186 [2024-07-24 17:21:32.210520] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:46.186 [2024-07-24 17:21:32.210529] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:46.186 [2024-07-24 17:21:32.210533] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:46.186 [2024-07-24 17:21:32.210539] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1acda60): datao=0, datal=4096, cccid=4 00:20:46.186 [2024-07-24 17:21:32.210546] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b10e40) on tqpair(0x1acda60): expected_datao=0, payload_size=4096 00:20:46.186 [2024-07-24 17:21:32.210554] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:46.186 [2024-07-24 17:21:32.210566] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:46.186 [2024-07-24 17:21:32.210574] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:46.186 [2024-07-24 17:21:32.210586] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:46.186 [2024-07-24 17:21:32.210593] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:46.186 [2024-07-24 17:21:32.210596] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:46.186 [2024-07-24 17:21:32.210601] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b10e40) on tqpair=0x1acda60 00:20:46.186 [2024-07-24 17:21:32.210617] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:20:46.186 [2024-07-24 17:21:32.210638] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:20:46.186 [2024-07-24 17:21:32.210652] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:20:46.186 [2024-07-24 17:21:32.210661] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:46.186 [2024-07-24 17:21:32.210666] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1acda60) 00:20:46.186 [2024-07-24 17:21:32.210674] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.186 [2024-07-24 17:21:32.210698] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b10e40, cid 4, qid 0 00:20:46.186 [2024-07-24 17:21:32.210786] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:46.186 [2024-07-24 17:21:32.210798] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:46.186 [2024-07-24 17:21:32.210802] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:46.186 [2024-07-24 17:21:32.210806] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1acda60): datao=0, datal=4096, cccid=4 00:20:46.186 [2024-07-24 17:21:32.210811] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0x1b10e40) on tqpair(0x1acda60): expected_datao=0, payload_size=4096 00:20:46.186 [2024-07-24 17:21:32.210816] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:46.186 [2024-07-24 17:21:32.210824] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:46.186 [2024-07-24 17:21:32.210828] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:46.186 [2024-07-24 17:21:32.210837] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:46.186 [2024-07-24 17:21:32.210847] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:46.186 [2024-07-24 17:21:32.210854] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:46.186 [2024-07-24 17:21:32.210861] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b10e40) on tqpair=0x1acda60 00:20:46.186 [2024-07-24 17:21:32.210907] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:20:46.186 [2024-07-24 17:21:32.210929] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:20:46.186 [2024-07-24 17:21:32.210944] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:46.186 [2024-07-24 17:21:32.210949] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1acda60) 00:20:46.186 [2024-07-24 17:21:32.210957] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.186 [2024-07-24 17:21:32.210983] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b10e40, cid 4, qid 0 00:20:46.186 [2024-07-24 17:21:32.211054] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:46.186 [2024-07-24 17:21:32.211068] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:46.186 [2024-07-24 17:21:32.211075] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:46.186 [2024-07-24 17:21:32.211082] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1acda60): datao=0, datal=4096, cccid=4 00:20:46.186 [2024-07-24 17:21:32.211088] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b10e40) on tqpair(0x1acda60): expected_datao=0, payload_size=4096 00:20:46.186 [2024-07-24 17:21:32.211094] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:46.186 [2024-07-24 17:21:32.211118] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:46.186 [2024-07-24 17:21:32.211124] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:46.186 [2024-07-24 17:21:32.211133] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:46.186 [2024-07-24 17:21:32.211140] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:46.186 [2024-07-24 17:21:32.211144] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:46.186 [2024-07-24 17:21:32.211149] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b10e40) on tqpair=0x1acda60 00:20:46.186 [2024-07-24 17:21:32.211163] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:20:46.186 [2024-07-24 17:21:32.211179] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] 
setting state to set supported log pages (timeout 30000 ms) 00:20:46.186 [2024-07-24 17:21:32.211197] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:20:46.186 [2024-07-24 17:21:32.211209] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:20:46.186 [2024-07-24 17:21:32.211219] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:20:46.186 [2024-07-24 17:21:32.211228] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:20:46.186 [2024-07-24 17:21:32.211238] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:20:46.186 [2024-07-24 17:21:32.211243] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:20:46.186 [2024-07-24 17:21:32.211249] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:20:46.186 [2024-07-24 17:21:32.211267] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:46.186 [2024-07-24 17:21:32.211273] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1acda60) 00:20:46.186 [2024-07-24 17:21:32.211281] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.187 [2024-07-24 17:21:32.211289] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:46.187 [2024-07-24 17:21:32.211293] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:46.187 [2024-07-24 17:21:32.211297] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1acda60) 00:20:46.187 [2024-07-24 17:21:32.211303] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:46.187 [2024-07-24 17:21:32.211338] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b10e40, cid 4, qid 0 00:20:46.187 [2024-07-24 17:21:32.211351] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b10fc0, cid 5, qid 0 00:20:46.187 [2024-07-24 17:21:32.211420] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:46.187 [2024-07-24 17:21:32.211436] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:46.187 [2024-07-24 17:21:32.211441] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:46.187 [2024-07-24 17:21:32.211445] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b10e40) on tqpair=0x1acda60 00:20:46.187 [2024-07-24 17:21:32.211452] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:46.187 [2024-07-24 17:21:32.211459] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:46.187 [2024-07-24 17:21:32.211465] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:46.187 [2024-07-24 17:21:32.211471] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b10fc0) on tqpair=0x1acda60 00:20:46.187 [2024-07-24 17:21:32.211490] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:46.187 
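With the controller now in the ready state, the exchanges just above (IDENTIFY with CNS 02h for the active namespace list, then CNS 00h and CNS 03h for namespace 1) are what populate the per-namespace data the library exposes. A short sketch, assuming the standard SPDK host API, of how an application would walk that active namespace list; the function name is illustrative:

#include <inttypes.h>
#include <stdio.h>

#include "spdk/nvme.h"

/* Walk the active namespace list that the IDENTIFY exchanges above populated. */
void
print_active_namespaces(struct spdk_nvme_ctrlr *ctrlr)
{
	uint32_t nsid;

	for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr);
	     nsid != 0;
	     nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
		struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);

		printf("Namespace ID:%" PRIu32 "  %" PRIu64 " LBAs of %" PRIu32 " bytes\n",
		       nsid,
		       spdk_nvme_ns_get_num_sectors(ns),
		       spdk_nvme_ns_get_sector_size(ns));
	}
}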
[2024-07-24 17:21:32.211498] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1acda60) 00:20:46.187 [2024-07-24 17:21:32.211511] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.187 [2024-07-24 17:21:32.211541] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b10fc0, cid 5, qid 0 00:20:46.187 [2024-07-24 17:21:32.211607] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:46.187 [2024-07-24 17:21:32.211620] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:46.187 [2024-07-24 17:21:32.211628] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:46.187 [2024-07-24 17:21:32.211635] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b10fc0) on tqpair=0x1acda60 00:20:46.187 [2024-07-24 17:21:32.211653] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:46.187 [2024-07-24 17:21:32.211660] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1acda60) 00:20:46.187 [2024-07-24 17:21:32.211668] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.187 [2024-07-24 17:21:32.211691] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b10fc0, cid 5, qid 0 00:20:46.187 [2024-07-24 17:21:32.211747] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:46.187 [2024-07-24 17:21:32.211755] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:46.187 [2024-07-24 17:21:32.211761] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:46.187 [2024-07-24 17:21:32.211768] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b10fc0) on tqpair=0x1acda60 00:20:46.187 [2024-07-24 17:21:32.211785] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:46.187 [2024-07-24 17:21:32.211794] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1acda60) 00:20:46.187 [2024-07-24 17:21:32.211804] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.187 [2024-07-24 17:21:32.211826] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b10fc0, cid 5, qid 0 00:20:46.187 [2024-07-24 17:21:32.211885] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:46.187 [2024-07-24 17:21:32.211897] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:46.187 [2024-07-24 17:21:32.211903] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:46.187 [2024-07-24 17:21:32.211910] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b10fc0) on tqpair=0x1acda60 00:20:46.187 [2024-07-24 17:21:32.211939] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:46.187 [2024-07-24 17:21:32.211947] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1acda60) 00:20:46.187 [2024-07-24 17:21:32.211955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.187 [2024-07-24 17:21:32.211963] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:46.187 
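The GET FEATURES and GET LOG PAGE admin commands issued here and just below (cdw10 values 07ff0001, 007f0002, 007f0003 and 03ff0005 select the Error, SMART/Health, Firmware Slot and Commands Supported and Effects pages) feed the log-page sections of the identify report printed further down. A minimal sketch, assuming the public admin-command API, of fetching the SMART/Health page and polling the admin queue for its completion; the helper name and the busy-wait loop are illustrative simplifications:

#include <stdbool.h>
#include <stdio.h>

#include "spdk/nvme.h"
#include "spdk/nvme_spec.h"

static volatile bool g_log_page_done;

static void
log_page_cb(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	if (spdk_nvme_cpl_is_error(cpl)) {
		fprintf(stderr, "GET LOG PAGE failed\n");
	}
	g_log_page_done = true;
}

/* Fetch the SMART / Health Information page (02h), one of the pages requested
 * in the GET LOG PAGE commands shown in this part of the trace. */
int
read_health_page(struct spdk_nvme_ctrlr *ctrlr,
		 struct spdk_nvme_health_information_page *page)
{
	int rc;

	g_log_page_done = false;
	rc = spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_HEALTH_INFORMATION,
					      SPDK_NVME_GLOBAL_NS_TAG, page, sizeof(*page),
					      0, log_page_cb, NULL);
	if (rc != 0) {
		return rc;
	}
	/* Busy-wait for simplicity; a real application would fold this into its poll loop. */
	while (!g_log_page_done) {
		spdk_nvme_ctrlr_process_admin_completions(ctrlr);
	}
	return 0;
}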
[2024-07-24 17:21:32.211968] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1acda60) 00:20:46.187 [2024-07-24 17:21:32.211974] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.187 [2024-07-24 17:21:32.211982] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:46.187 [2024-07-24 17:21:32.211987] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1acda60) 00:20:46.187 [2024-07-24 17:21:32.211993] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.187 [2024-07-24 17:21:32.212002] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:46.187 [2024-07-24 17:21:32.212007] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1acda60) 00:20:46.187 [2024-07-24 17:21:32.212017] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.187 [2024-07-24 17:21:32.212053] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b10fc0, cid 5, qid 0 00:20:46.187 [2024-07-24 17:21:32.212066] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b10e40, cid 4, qid 0 00:20:46.187 [2024-07-24 17:21:32.212075] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b11140, cid 6, qid 0 00:20:46.187 [2024-07-24 17:21:32.212082] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b112c0, cid 7, qid 0 00:20:46.187 [2024-07-24 17:21:32.216139] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:46.187 [2024-07-24 17:21:32.216177] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:46.187 [2024-07-24 17:21:32.216182] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:46.187 [2024-07-24 17:21:32.216186] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1acda60): datao=0, datal=8192, cccid=5 00:20:46.187 [2024-07-24 17:21:32.216192] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b10fc0) on tqpair(0x1acda60): expected_datao=0, payload_size=8192 00:20:46.187 [2024-07-24 17:21:32.216197] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:46.187 [2024-07-24 17:21:32.216205] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:46.187 [2024-07-24 17:21:32.216210] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:46.187 [2024-07-24 17:21:32.216216] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:46.187 [2024-07-24 17:21:32.216222] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:46.187 [2024-07-24 17:21:32.216226] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:46.187 [2024-07-24 17:21:32.216230] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1acda60): datao=0, datal=512, cccid=4 00:20:46.187 [2024-07-24 17:21:32.216235] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b10e40) on tqpair(0x1acda60): expected_datao=0, payload_size=512 00:20:46.187 [2024-07-24 17:21:32.216240] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:46.187 [2024-07-24 
17:21:32.216246] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:46.188 [2024-07-24 17:21:32.216250] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:46.188 [2024-07-24 17:21:32.216256] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:46.188 [2024-07-24 17:21:32.216262] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:46.188 [2024-07-24 17:21:32.216266] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:46.188 [2024-07-24 17:21:32.216270] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1acda60): datao=0, datal=512, cccid=6 00:20:46.188 [2024-07-24 17:21:32.216275] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b11140) on tqpair(0x1acda60): expected_datao=0, payload_size=512 00:20:46.188 [2024-07-24 17:21:32.216279] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:46.188 [2024-07-24 17:21:32.216286] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:46.188 [2024-07-24 17:21:32.216290] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:46.188 [2024-07-24 17:21:32.216296] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:46.188 [2024-07-24 17:21:32.216301] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:46.188 [2024-07-24 17:21:32.216305] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:46.188 [2024-07-24 17:21:32.216309] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1acda60): datao=0, datal=4096, cccid=7 00:20:46.188 [2024-07-24 17:21:32.216314] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b112c0) on tqpair(0x1acda60): expected_datao=0, payload_size=4096 00:20:46.188 [2024-07-24 17:21:32.216318] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:46.188 [2024-07-24 17:21:32.216325] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:46.188 [2024-07-24 17:21:32.216329] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:46.188 [2024-07-24 17:21:32.216335] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:46.188 [2024-07-24 17:21:32.216341] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:46.188 [2024-07-24 17:21:32.216345] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:46.188 [2024-07-24 17:21:32.216349] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b10fc0) on tqpair=0x1acda60 00:20:46.188 [2024-07-24 17:21:32.216372] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:46.188 [2024-07-24 17:21:32.216379] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:46.188 [2024-07-24 17:21:32.216383] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:46.188 [2024-07-24 17:21:32.216387] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b10e40) on tqpair=0x1acda60 00:20:46.188 [2024-07-24 17:21:32.216401] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:46.188 [2024-07-24 17:21:32.216407] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:46.188 [2024-07-24 17:21:32.216413] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:46.188 [2024-07-24 17:21:32.216419] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b11140) on tqpair=0x1acda60 00:20:46.188 [2024-07-24 17:21:32.216431] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:46.188 [2024-07-24 17:21:32.216441] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:46.188 [2024-07-24 17:21:32.216448] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:46.188 [2024-07-24 17:21:32.216454] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b112c0) on tqpair=0x1acda60 00:20:46.188 ===================================================== 00:20:46.188 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:46.188 ===================================================== 00:20:46.188 Controller Capabilities/Features 00:20:46.188 ================================ 00:20:46.188 Vendor ID: 8086 00:20:46.188 Subsystem Vendor ID: 8086 00:20:46.188 Serial Number: SPDK00000000000001 00:20:46.188 Model Number: SPDK bdev Controller 00:20:46.188 Firmware Version: 24.09 00:20:46.188 Recommended Arb Burst: 6 00:20:46.188 IEEE OUI Identifier: e4 d2 5c 00:20:46.188 Multi-path I/O 00:20:46.188 May have multiple subsystem ports: Yes 00:20:46.188 May have multiple controllers: Yes 00:20:46.188 Associated with SR-IOV VF: No 00:20:46.188 Max Data Transfer Size: 131072 00:20:46.188 Max Number of Namespaces: 32 00:20:46.188 Max Number of I/O Queues: 127 00:20:46.188 NVMe Specification Version (VS): 1.3 00:20:46.188 NVMe Specification Version (Identify): 1.3 00:20:46.188 Maximum Queue Entries: 128 00:20:46.188 Contiguous Queues Required: Yes 00:20:46.188 Arbitration Mechanisms Supported 00:20:46.188 Weighted Round Robin: Not Supported 00:20:46.188 Vendor Specific: Not Supported 00:20:46.188 Reset Timeout: 15000 ms 00:20:46.188 Doorbell Stride: 4 bytes 00:20:46.188 NVM Subsystem Reset: Not Supported 00:20:46.188 Command Sets Supported 00:20:46.188 NVM Command Set: Supported 00:20:46.188 Boot Partition: Not Supported 00:20:46.188 Memory Page Size Minimum: 4096 bytes 00:20:46.188 Memory Page Size Maximum: 4096 bytes 00:20:46.188 Persistent Memory Region: Not Supported 00:20:46.188 Optional Asynchronous Events Supported 00:20:46.188 Namespace Attribute Notices: Supported 00:20:46.188 Firmware Activation Notices: Not Supported 00:20:46.188 ANA Change Notices: Not Supported 00:20:46.188 PLE Aggregate Log Change Notices: Not Supported 00:20:46.188 LBA Status Info Alert Notices: Not Supported 00:20:46.188 EGE Aggregate Log Change Notices: Not Supported 00:20:46.188 Normal NVM Subsystem Shutdown event: Not Supported 00:20:46.188 Zone Descriptor Change Notices: Not Supported 00:20:46.188 Discovery Log Change Notices: Not Supported 00:20:46.188 Controller Attributes 00:20:46.188 128-bit Host Identifier: Supported 00:20:46.188 Non-Operational Permissive Mode: Not Supported 00:20:46.188 NVM Sets: Not Supported 00:20:46.188 Read Recovery Levels: Not Supported 00:20:46.188 Endurance Groups: Not Supported 00:20:46.188 Predictable Latency Mode: Not Supported 00:20:46.188 Traffic Based Keep ALive: Not Supported 00:20:46.188 Namespace Granularity: Not Supported 00:20:46.188 SQ Associations: Not Supported 00:20:46.188 UUID List: Not Supported 00:20:46.188 Multi-Domain Subsystem: Not Supported 00:20:46.188 Fixed Capacity Management: Not Supported 00:20:46.188 Variable Capacity Management: Not Supported 00:20:46.188 Delete Endurance Group: Not Supported 00:20:46.188 Delete NVM Set: Not Supported 00:20:46.188 Extended LBA Formats Supported: Not Supported 00:20:46.188 Flexible Data Placement Supported: Not Supported 00:20:46.188 00:20:46.188 Controller Memory Buffer Support 
00:20:46.188 ================================ 00:20:46.188 Supported: No 00:20:46.188 00:20:46.188 Persistent Memory Region Support 00:20:46.188 ================================ 00:20:46.188 Supported: No 00:20:46.188 00:20:46.188 Admin Command Set Attributes 00:20:46.188 ============================ 00:20:46.188 Security Send/Receive: Not Supported 00:20:46.188 Format NVM: Not Supported 00:20:46.188 Firmware Activate/Download: Not Supported 00:20:46.188 Namespace Management: Not Supported 00:20:46.188 Device Self-Test: Not Supported 00:20:46.188 Directives: Not Supported 00:20:46.188 NVMe-MI: Not Supported 00:20:46.188 Virtualization Management: Not Supported 00:20:46.188 Doorbell Buffer Config: Not Supported 00:20:46.188 Get LBA Status Capability: Not Supported 00:20:46.188 Command & Feature Lockdown Capability: Not Supported 00:20:46.188 Abort Command Limit: 4 00:20:46.188 Async Event Request Limit: 4 00:20:46.188 Number of Firmware Slots: N/A 00:20:46.188 Firmware Slot 1 Read-Only: N/A 00:20:46.188 Firmware Activation Without Reset: N/A 00:20:46.188 Multiple Update Detection Support: N/A 00:20:46.188 Firmware Update Granularity: No Information Provided 00:20:46.188 Per-Namespace SMART Log: No 00:20:46.188 Asymmetric Namespace Access Log Page: Not Supported 00:20:46.188 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:20:46.188 Command Effects Log Page: Supported 00:20:46.188 Get Log Page Extended Data: Supported 00:20:46.188 Telemetry Log Pages: Not Supported 00:20:46.188 Persistent Event Log Pages: Not Supported 00:20:46.188 Supported Log Pages Log Page: May Support 00:20:46.188 Commands Supported & Effects Log Page: Not Supported 00:20:46.188 Feature Identifiers & Effects Log Page:May Support 00:20:46.188 NVMe-MI Commands & Effects Log Page: May Support 00:20:46.189 Data Area 4 for Telemetry Log: Not Supported 00:20:46.189 Error Log Page Entries Supported: 128 00:20:46.189 Keep Alive: Supported 00:20:46.189 Keep Alive Granularity: 10000 ms 00:20:46.189 00:20:46.189 NVM Command Set Attributes 00:20:46.189 ========================== 00:20:46.189 Submission Queue Entry Size 00:20:46.189 Max: 64 00:20:46.189 Min: 64 00:20:46.189 Completion Queue Entry Size 00:20:46.189 Max: 16 00:20:46.189 Min: 16 00:20:46.189 Number of Namespaces: 32 00:20:46.189 Compare Command: Supported 00:20:46.189 Write Uncorrectable Command: Not Supported 00:20:46.189 Dataset Management Command: Supported 00:20:46.189 Write Zeroes Command: Supported 00:20:46.189 Set Features Save Field: Not Supported 00:20:46.189 Reservations: Supported 00:20:46.189 Timestamp: Not Supported 00:20:46.189 Copy: Supported 00:20:46.189 Volatile Write Cache: Present 00:20:46.189 Atomic Write Unit (Normal): 1 00:20:46.189 Atomic Write Unit (PFail): 1 00:20:46.189 Atomic Compare & Write Unit: 1 00:20:46.189 Fused Compare & Write: Supported 00:20:46.189 Scatter-Gather List 00:20:46.189 SGL Command Set: Supported 00:20:46.189 SGL Keyed: Supported 00:20:46.189 SGL Bit Bucket Descriptor: Not Supported 00:20:46.189 SGL Metadata Pointer: Not Supported 00:20:46.189 Oversized SGL: Not Supported 00:20:46.189 SGL Metadata Address: Not Supported 00:20:46.189 SGL Offset: Supported 00:20:46.189 Transport SGL Data Block: Not Supported 00:20:46.189 Replay Protected Memory Block: Not Supported 00:20:46.189 00:20:46.189 Firmware Slot Information 00:20:46.189 ========================= 00:20:46.189 Active slot: 1 00:20:46.189 Slot 1 Firmware Revision: 24.09 00:20:46.189 00:20:46.189 00:20:46.189 Commands Supported and Effects 00:20:46.189 
============================== 00:20:46.189 Admin Commands 00:20:46.189 -------------- 00:20:46.189 Get Log Page (02h): Supported 00:20:46.189 Identify (06h): Supported 00:20:46.189 Abort (08h): Supported 00:20:46.189 Set Features (09h): Supported 00:20:46.189 Get Features (0Ah): Supported 00:20:46.189 Asynchronous Event Request (0Ch): Supported 00:20:46.189 Keep Alive (18h): Supported 00:20:46.189 I/O Commands 00:20:46.189 ------------ 00:20:46.189 Flush (00h): Supported LBA-Change 00:20:46.189 Write (01h): Supported LBA-Change 00:20:46.189 Read (02h): Supported 00:20:46.189 Compare (05h): Supported 00:20:46.189 Write Zeroes (08h): Supported LBA-Change 00:20:46.189 Dataset Management (09h): Supported LBA-Change 00:20:46.189 Copy (19h): Supported LBA-Change 00:20:46.189 00:20:46.189 Error Log 00:20:46.189 ========= 00:20:46.189 00:20:46.189 Arbitration 00:20:46.189 =========== 00:20:46.189 Arbitration Burst: 1 00:20:46.189 00:20:46.189 Power Management 00:20:46.189 ================ 00:20:46.189 Number of Power States: 1 00:20:46.189 Current Power State: Power State #0 00:20:46.189 Power State #0: 00:20:46.189 Max Power: 0.00 W 00:20:46.189 Non-Operational State: Operational 00:20:46.189 Entry Latency: Not Reported 00:20:46.189 Exit Latency: Not Reported 00:20:46.189 Relative Read Throughput: 0 00:20:46.189 Relative Read Latency: 0 00:20:46.189 Relative Write Throughput: 0 00:20:46.189 Relative Write Latency: 0 00:20:46.189 Idle Power: Not Reported 00:20:46.189 Active Power: Not Reported 00:20:46.189 Non-Operational Permissive Mode: Not Supported 00:20:46.189 00:20:46.189 Health Information 00:20:46.189 ================== 00:20:46.189 Critical Warnings: 00:20:46.189 Available Spare Space: OK 00:20:46.189 Temperature: OK 00:20:46.189 Device Reliability: OK 00:20:46.189 Read Only: No 00:20:46.189 Volatile Memory Backup: OK 00:20:46.189 Current Temperature: 0 Kelvin (-273 Celsius) 00:20:46.189 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:20:46.189 Available Spare: 0% 00:20:46.189 Available Spare Threshold: 0% 00:20:46.189 Life Percentage Used:[2024-07-24 17:21:32.216588] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:46.189 [2024-07-24 17:21:32.216601] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1acda60) 00:20:46.189 [2024-07-24 17:21:32.216616] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.189 [2024-07-24 17:21:32.216652] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b112c0, cid 7, qid 0 00:20:46.189 [2024-07-24 17:21:32.216729] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:46.189 [2024-07-24 17:21:32.216743] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:46.189 [2024-07-24 17:21:32.216751] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:46.189 [2024-07-24 17:21:32.216756] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b112c0) on tqpair=0x1acda60 00:20:46.189 [2024-07-24 17:21:32.216801] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:20:46.189 [2024-07-24 17:21:32.216814] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b10840) on tqpair=0x1acda60 00:20:46.189 [2024-07-24 17:21:32.216822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
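At this point the application is done with the controller and the library begins tearing it down: outstanding admin requests complete as ABORTED - SQ DELETION, CC is written for shutdown, and the trace below polls CSTS over FABRIC PROPERTY GET until it reports shutdown complete in 7 milliseconds. A sketch of the application-side call that triggers this path, assuming the asynchronous detach helpers from the public header; a plain spdk_nvme_detach() call would serve the same purpose for a single controller:

#include "spdk/nvme.h"

/* Application-side teardown that triggers the shutdown sequence traced here:
 * abort outstanding admin requests, write CC for shutdown, then poll CSTS
 * until the controller reports that shutdown is complete. */
void
disconnect_controller(struct spdk_nvme_ctrlr *ctrlr)
{
	struct spdk_nvme_detach_ctx *detach_ctx = NULL;

	if (spdk_nvme_detach_async(ctrlr, &detach_ctx) == 0 && detach_ctx != NULL) {
		/* Blocks until every controller attached to this context has detached. */
		spdk_nvme_detach_poll(detach_ctx);
	}
}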
00:20:46.189 [2024-07-24 17:21:32.216828] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b109c0) on tqpair=0x1acda60 00:20:46.189 [2024-07-24 17:21:32.216833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.189 [2024-07-24 17:21:32.216839] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b10b40) on tqpair=0x1acda60 00:20:46.189 [2024-07-24 17:21:32.216846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.189 [2024-07-24 17:21:32.216855] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b10cc0) on tqpair=0x1acda60 00:20:46.189 [2024-07-24 17:21:32.216863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.189 [2024-07-24 17:21:32.216878] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:46.189 [2024-07-24 17:21:32.216888] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:46.189 [2024-07-24 17:21:32.216895] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1acda60) 00:20:46.189 [2024-07-24 17:21:32.216907] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.189 [2024-07-24 17:21:32.216945] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b10cc0, cid 3, qid 0 00:20:46.189 [2024-07-24 17:21:32.217000] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:46.189 [2024-07-24 17:21:32.217009] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:46.189 [2024-07-24 17:21:32.217013] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:46.189 [2024-07-24 17:21:32.217017] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b10cc0) on tqpair=0x1acda60 00:20:46.189 [2024-07-24 17:21:32.217027] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:46.189 [2024-07-24 17:21:32.217034] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:46.189 [2024-07-24 17:21:32.217040] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1acda60) 00:20:46.189 [2024-07-24 17:21:32.217052] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.189 [2024-07-24 17:21:32.217091] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b10cc0, cid 3, qid 0 00:20:46.189 [2024-07-24 17:21:32.217181] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:46.189 [2024-07-24 17:21:32.217196] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:46.189 [2024-07-24 17:21:32.217201] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:46.189 [2024-07-24 17:21:32.217206] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b10cc0) on tqpair=0x1acda60 00:20:46.189 [2024-07-24 17:21:32.217211] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:20:46.190 [2024-07-24 17:21:32.217216] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:20:46.190 [2024-07-24 17:21:32.217229] nvme_tcp.c: 790:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:20:46.190 [2024-07-24 17:21:32.217242] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:46.190 [2024-07-24 17:21:32.217246] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1acda60) 00:20:46.190 [2024-07-24 17:21:32.217254] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.190 [2024-07-24 17:21:32.217284] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b10cc0, cid 3, qid 0 00:20:46.190 [2024-07-24 17:21:32.217338] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:46.190 [2024-07-24 17:21:32.217347] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:46.190 [2024-07-24 17:21:32.217351] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:46.190 [2024-07-24 17:21:32.217356] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b10cc0) on tqpair=0x1acda60 00:20:46.190 [2024-07-24 17:21:32.217371] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:46.190 [2024-07-24 17:21:32.217380] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:46.190 [2024-07-24 17:21:32.217387] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1acda60) 00:20:46.190 [2024-07-24 17:21:32.217397] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.190 [2024-07-24 17:21:32.217421] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b10cc0, cid 3, qid 0 00:20:46.190 [2024-07-24 17:21:32.217476] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:46.190 [2024-07-24 17:21:32.217488] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:46.190 [2024-07-24 17:21:32.217496] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:46.190 [2024-07-24 17:21:32.217502] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b10cc0) on tqpair=0x1acda60 00:20:46.190 [2024-07-24 17:21:32.217517] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:46.190 [2024-07-24 17:21:32.217522] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:46.190 [2024-07-24 17:21:32.217526] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1acda60) 00:20:46.190 [2024-07-24 17:21:32.217534] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.190 [2024-07-24 17:21:32.217556] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b10cc0, cid 3, qid 0 00:20:46.190 [2024-07-24 17:21:32.217610] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:46.190 [2024-07-24 17:21:32.217624] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:46.190 [2024-07-24 17:21:32.217631] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:46.190 [2024-07-24 17:21:32.217637] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b10cc0) on tqpair=0x1acda60 00:20:46.190 [2024-07-24 17:21:32.217655] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:46.190 [2024-07-24 17:21:32.217664] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:46.190 [2024-07-24 17:21:32.217670] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1acda60) 00:20:46.190 [2024-07-24 17:21:32.217679] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.190 [2024-07-24 17:21:32.217702] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b10cc0, cid 3, qid 0 00:20:46.190 [2024-07-24 17:21:32.217759] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:46.190 [2024-07-24 17:21:32.217772] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:46.190 [2024-07-24 17:21:32.217779] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:46.190 [2024-07-24 17:21:32.217786] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b10cc0) on tqpair=0x1acda60 00:20:46.190 [2024-07-24 17:21:32.217804] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:46.190 [2024-07-24 17:21:32.217812] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:46.190 [2024-07-24 17:21:32.217816] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1acda60) 00:20:46.190 [2024-07-24 17:21:32.217824] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.190 [2024-07-24 17:21:32.217847] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b10cc0, cid 3, qid 0 00:20:46.190 [2024-07-24 17:21:32.217901] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:46.190 [2024-07-24 17:21:32.217914] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:46.190 [2024-07-24 17:21:32.217921] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:46.190 [2024-07-24 17:21:32.217927] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b10cc0) on tqpair=0x1acda60 00:20:46.190 [2024-07-24 17:21:32.217945] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:46.190 [2024-07-24 17:21:32.217954] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:46.190 [2024-07-24 17:21:32.217959] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1acda60) 00:20:46.190 [2024-07-24 17:21:32.217967] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.190 [2024-07-24 17:21:32.217990] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b10cc0, cid 3, qid 0 00:20:46.190 [2024-07-24 17:21:32.218047] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:46.190 [2024-07-24 17:21:32.218060] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:46.190 [2024-07-24 17:21:32.218067] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:46.190 [2024-07-24 17:21:32.218074] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b10cc0) on tqpair=0x1acda60 00:20:46.190 [2024-07-24 17:21:32.218091] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:46.190 [2024-07-24 17:21:32.218114] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:46.190 [2024-07-24 17:21:32.218119] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1acda60) 00:20:46.190 [2024-07-24 17:21:32.218127] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.190 [2024-07-24 17:21:32.218152] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b10cc0, cid 3, qid 0 00:20:46.190 [2024-07-24 17:21:32.218215] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:46.190 [2024-07-24 17:21:32.218227] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:46.190 [2024-07-24 17:21:32.218233] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:46.190 [2024-07-24 17:21:32.218240] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b10cc0) on tqpair=0x1acda60 00:20:46.190 [2024-07-24 17:21:32.218258] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:46.190 [2024-07-24 17:21:32.218266] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:46.190 [2024-07-24 17:21:32.218270] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1acda60) 00:20:46.190 [2024-07-24 17:21:32.218278] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.190 [2024-07-24 17:21:32.218301] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b10cc0, cid 3, qid 0 00:20:46.190 [2024-07-24 17:21:32.218360] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:46.190 [2024-07-24 17:21:32.218374] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:46.190 [2024-07-24 17:21:32.218381] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:46.190 [2024-07-24 17:21:32.218388] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b10cc0) on tqpair=0x1acda60 00:20:46.190 [2024-07-24 17:21:32.218405] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:46.190 [2024-07-24 17:21:32.218415] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:46.190 [2024-07-24 17:21:32.218421] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1acda60) 00:20:46.190 [2024-07-24 17:21:32.218430] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.190 [2024-07-24 17:21:32.218453] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b10cc0, cid 3, qid 0 00:20:46.190 [2024-07-24 17:21:32.218515] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:46.190 [2024-07-24 17:21:32.218525] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:46.190 [2024-07-24 17:21:32.218529] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:46.190 [2024-07-24 17:21:32.218533] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b10cc0) on tqpair=0x1acda60 00:20:46.190 [2024-07-24 17:21:32.218545] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:46.190 [2024-07-24 17:21:32.218550] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:46.190 [2024-07-24 17:21:32.218554] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1acda60) 00:20:46.190 [2024-07-24 17:21:32.218562] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.190 [2024-07-24 17:21:32.218584] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b10cc0, cid 3, qid 0 00:20:46.190 [2024-07-24 
17:21:32.218646] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:46.190 [2024-07-24 17:21:32.218658] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:46.190 [2024-07-24 17:21:32.218663] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:46.190 [2024-07-24 17:21:32.218668] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b10cc0) on tqpair=0x1acda60 00:20:46.190 [2024-07-24 17:21:32.218684] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:46.190 [2024-07-24 17:21:32.218693] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:46.190 [2024-07-24 17:21:32.218700] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1acda60) 00:20:46.190 [2024-07-24 17:21:32.218708] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.190 [2024-07-24 17:21:32.218731] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b10cc0, cid 3, qid 0 00:20:46.190 
[2024-07-24 17:21:32.219932] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:46.191 [2024-07-24 17:21:32.219940] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:46.191 [2024-07-24 17:21:32.219946] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:46.191 [2024-07-24 17:21:32.219953] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b10cc0) on tqpair=0x1acda60 00:20:46.191 [2024-07-24 17:21:32.219969] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:46.191 [2024-07-24 17:21:32.219978] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:46.191 [2024-07-24 17:21:32.219984] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1acda60) 00:20:46.191 [2024-07-24 17:21:32.219996] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.191 [2024-07-24 17:21:32.220019] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b10cc0, cid 3, qid 0 00:20:46.191 [2024-07-24 17:21:32.220071] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:46.191 [2024-07-24 17:21:32.220078] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:46.191 [2024-07-24 17:21:32.220084] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:46.191 [2024-07-24 17:21:32.220090] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b10cc0) on tqpair=0x1acda60 00:20:46.191 [2024-07-24 17:21:32.224181] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:46.191 [2024-07-24 17:21:32.224202] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:46.191 [2024-07-24 17:21:32.224207] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1acda60) 00:20:46.191 [2024-07-24 17:21:32.224216] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.191 [2024-07-24 17:21:32.224245] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b10cc0, cid 3, qid 0 00:20:46.191 [2024-07-24 17:21:32.224309] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:46.191 [2024-07-24 17:21:32.224317] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:46.191 [2024-07-24 17:21:32.224321] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:46.191 [2024-07-24 17:21:32.224325] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b10cc0) on tqpair=0x1acda60 00:20:46.191 [2024-07-24 17:21:32.224334] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:20:46.191 0% 00:20:46.191 Data Units Read: 0 00:20:46.191 Data Units Written: 0 00:20:46.191 Host Read Commands: 0 00:20:46.191 Host Write Commands: 0 00:20:46.191 Controller Busy Time: 0 minutes 00:20:46.191 Power Cycles: 0 00:20:46.191 Power On Hours: 0 hours 00:20:46.191 Unsafe Shutdowns: 0 00:20:46.191 Unrecoverable Media Errors: 0 00:20:46.191 Lifetime Error Log Entries: 0 00:20:46.191 Warning Temperature Time: 0 minutes 00:20:46.191 Critical Temperature Time: 0 minutes 00:20:46.191 00:20:46.191 Number of Queues 00:20:46.191 ================ 00:20:46.191 Number of I/O Submission Queues: 127 00:20:46.191 Number of I/O Completion Queues: 127 00:20:46.191 00:20:46.191 Active Namespaces 00:20:46.191 ================= 00:20:46.191 Namespace ID:1 00:20:46.191 Error Recovery Timeout: Unlimited 00:20:46.191 Command Set Identifier: NVM (00h) 00:20:46.191 Deallocate: Supported 00:20:46.191 Deallocated/Unwritten Error: Not Supported 00:20:46.191 Deallocated Read Value: Unknown 00:20:46.191 Deallocate in Write Zeroes: Not Supported 00:20:46.191 Deallocated Guard Field: 0xFFFF 00:20:46.191 Flush: Supported 00:20:46.191 Reservation: Supported 00:20:46.191 Namespace Sharing Capabilities: Multiple Controllers 00:20:46.191 Size (in LBAs): 131072 (0GiB) 00:20:46.191 Capacity (in LBAs): 131072 (0GiB) 
00:20:46.191 Utilization (in LBAs): 131072 (0GiB) 00:20:46.191 NGUID: ABCDEF0123456789ABCDEF0123456789 00:20:46.191 EUI64: ABCDEF0123456789 00:20:46.191 UUID: 03f56112-630b-4b2a-9d89-6ede1c62b499 00:20:46.191 Thin Provisioning: Not Supported 00:20:46.192 Per-NS Atomic Units: Yes 00:20:46.192 Atomic Boundary Size (Normal): 0 00:20:46.192 Atomic Boundary Size (PFail): 0 00:20:46.192 Atomic Boundary Offset: 0 00:20:46.192 Maximum Single Source Range Length: 65535 00:20:46.192 Maximum Copy Length: 65535 00:20:46.192 Maximum Source Range Count: 1 00:20:46.192 NGUID/EUI64 Never Reused: No 00:20:46.192 Namespace Write Protected: No 00:20:46.192 Number of LBA Formats: 1 00:20:46.192 Current LBA Format: LBA Format #00 00:20:46.192 LBA Format #00: Data Size: 512 Metadata Size: 0 00:20:46.192 00:20:46.192 17:21:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:20:46.192 17:21:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:46.192 17:21:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.192 17:21:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:46.192 17:21:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.192 17:21:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:20:46.192 17:21:32 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:20:46.192 17:21:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:46.192 17:21:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:20:46.192 17:21:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:46.192 17:21:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:20:46.192 17:21:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:46.192 17:21:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:46.192 rmmod nvme_tcp 00:20:46.192 rmmod nvme_fabrics 00:20:46.192 rmmod nvme_keyring 00:20:46.192 17:21:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:46.192 17:21:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:20:46.192 17:21:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:20:46.192 17:21:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 86148 ']' 00:20:46.192 17:21:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 86148 00:20:46.192 17:21:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 86148 ']' 00:20:46.192 17:21:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 86148 00:20:46.192 17:21:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:20:46.192 17:21:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:46.192 17:21:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86148 00:20:46.192 17:21:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:46.192 17:21:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:46.192 killing process with pid 86148 00:20:46.192 17:21:32 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86148' 00:20:46.192 17:21:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 86148 00:20:46.192 17:21:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 86148 00:20:46.450 17:21:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:46.450 17:21:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:46.450 17:21:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:46.450 17:21:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:46.450 17:21:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:46.450 17:21:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:46.450 17:21:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:46.450 17:21:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:46.450 17:21:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:46.450 00:20:46.450 real 0m2.536s 00:20:46.450 user 0m7.132s 00:20:46.450 sys 0m0.633s 00:20:46.450 17:21:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:46.450 17:21:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:46.450 ************************************ 00:20:46.450 END TEST nvmf_identify 00:20:46.450 ************************************ 00:20:46.710 17:21:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:46.710 17:21:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:46.710 17:21:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:46.710 17:21:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.710 ************************************ 00:20:46.710 START TEST nvmf_perf 00:20:46.710 ************************************ 00:20:46.710 17:21:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:46.710 * Looking for test storage... 
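For reference, the perf suite that starts here is driven by test/nvmf/host/perf.sh with the TCP transport, exactly as the run_test line above shows. A minimal sketch of launching the same script by hand outside the CI wrapper follows; the repository path is the one printed in this log, while running it as root against an already-built SPDK tree is an assumption:

# Hedged sketch: re-run the nvmf_perf host suite stand-alone (assumes a built SPDK tree and root privileges).
cd /home/vagrant/spdk_repo/spdk
sudo ./test/nvmf/host/perf.sh --transport=tcp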
00:20:46.710 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:46.710 17:21:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:46.710 17:21:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:20:46.710 17:21:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:46.710 17:21:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:46.710 17:21:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:46.710 17:21:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:46.710 17:21:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:46.710 17:21:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:46.710 17:21:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:46.710 17:21:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:46.710 17:21:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:46.710 17:21:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:46.710 17:21:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:20:46.710 17:21:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:20:46.710 17:21:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:46.710 17:21:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:46.710 17:21:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:46.710 17:21:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:46.710 17:21:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:46.710 17:21:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:46.710 17:21:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:46.710 17:21:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:46.710 17:21:32 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.710 17:21:32 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.710 17:21:32 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.710 17:21:32 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:20:46.710 17:21:32 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.710 17:21:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:20:46.710 17:21:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:46.710 17:21:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:46.710 17:21:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:46.710 17:21:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:46.710 17:21:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:46.710 17:21:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:46.710 17:21:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:46.710 17:21:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:46.710 17:21:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:46.710 17:21:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:46.710 17:21:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:46.710 17:21:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:20:46.710 17:21:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:46.710 17:21:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:46.710 17:21:32 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:46.710 17:21:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:46.710 17:21:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:46.710 17:21:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:46.710 17:21:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:46.710 17:21:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:46.710 17:21:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:46.710 17:21:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:46.710 17:21:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:46.710 17:21:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:46.710 17:21:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:46.710 17:21:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:46.710 17:21:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:46.710 17:21:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:46.710 17:21:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:46.710 17:21:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:46.710 17:21:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:46.710 17:21:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:46.710 17:21:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:46.710 17:21:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:46.710 17:21:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:46.710 17:21:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:46.710 17:21:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:46.710 17:21:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:46.710 17:21:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:46.710 17:21:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:46.710 Cannot find device "nvmf_tgt_br" 00:20:46.710 17:21:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # true 00:20:46.710 17:21:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:46.710 Cannot find device "nvmf_tgt_br2" 00:20:46.710 17:21:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # true 00:20:46.710 17:21:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:46.710 17:21:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:46.710 Cannot find device "nvmf_tgt_br" 00:20:46.710 17:21:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # true 00:20:46.710 17:21:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # 
ip link set nvmf_tgt_br2 down 00:20:46.710 Cannot find device "nvmf_tgt_br2" 00:20:46.711 17:21:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # true 00:20:46.711 17:21:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:46.969 17:21:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:46.969 17:21:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:46.969 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:46.969 17:21:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:20:46.969 17:21:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:46.969 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:46.969 17:21:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:20:46.969 17:21:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:46.969 17:21:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:46.969 17:21:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:46.969 17:21:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:46.969 17:21:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:46.969 17:21:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:46.969 17:21:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:46.969 17:21:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:46.969 17:21:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:46.969 17:21:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:46.969 17:21:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:46.969 17:21:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:46.969 17:21:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:46.969 17:21:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:46.969 17:21:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:46.969 17:21:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:46.969 17:21:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:46.969 17:21:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:46.969 17:21:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:46.969 17:21:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:46.969 17:21:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 
master nvmf_br 00:20:46.969 17:21:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:46.969 17:21:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:46.969 17:21:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:46.969 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:46.969 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:20:46.969 00:20:46.969 --- 10.0.0.2 ping statistics --- 00:20:46.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:46.970 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:20:46.970 17:21:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:46.970 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:46.970 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:20:46.970 00:20:46.970 --- 10.0.0.3 ping statistics --- 00:20:46.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:46.970 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:20:46.970 17:21:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:46.970 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:46.970 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:20:46.970 00:20:46.970 --- 10.0.0.1 ping statistics --- 00:20:46.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:46.970 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:20:46.970 17:21:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:46.970 17:21:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@433 -- # return 0 00:20:46.970 17:21:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:46.970 17:21:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:46.970 17:21:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:46.970 17:21:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:46.970 17:21:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:46.970 17:21:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:46.970 17:21:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:47.228 17:21:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:20:47.228 17:21:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:47.228 17:21:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:47.228 17:21:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:47.228 17:21:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=86375 00:20:47.228 17:21:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 86375 00:20:47.228 17:21:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:47.228 17:21:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 86375 ']' 00:20:47.228 17:21:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:47.228 17:21:33 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:47.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:47.228 17:21:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:47.228 17:21:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:47.228 17:21:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:47.228 [2024-07-24 17:21:33.286168] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:20:47.228 [2024-07-24 17:21:33.286273] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:47.228 [2024-07-24 17:21:33.426854] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:47.487 [2024-07-24 17:21:33.532797] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:47.487 [2024-07-24 17:21:33.532863] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:47.487 [2024-07-24 17:21:33.532874] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:47.487 [2024-07-24 17:21:33.532881] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:47.487 [2024-07-24 17:21:33.532888] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:47.487 [2024-07-24 17:21:33.533041] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:47.487 [2024-07-24 17:21:33.533366] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:47.487 [2024-07-24 17:21:33.534058] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:47.487 [2024-07-24 17:21:33.534002] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:48.051 17:21:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:48.051 17:21:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:20:48.051 17:21:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:48.051 17:21:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:48.051 17:21:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:48.051 17:21:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:48.051 17:21:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:20:48.051 17:21:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:20:48.616 17:21:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:20:48.616 17:21:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:20:48.874 17:21:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:20:48.874 17:21:34 nvmf_tcp.nvmf_host.nvmf_perf -- 
host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:49.132 17:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:20:49.132 17:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:20:49.132 17:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:20:49.132 17:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:20:49.132 17:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:49.390 [2024-07-24 17:21:35.503625] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:49.390 17:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:49.648 17:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:49.648 17:21:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:49.906 17:21:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:49.906 17:21:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:20:50.163 17:21:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:50.421 [2024-07-24 17:21:36.597050] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:50.421 17:21:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:50.678 17:21:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:20:50.678 17:21:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:20:50.678 17:21:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:20:50.678 17:21:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:20:52.050 Initializing NVMe Controllers 00:20:52.050 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:20:52.050 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:20:52.050 Initialization complete. Launching workers. 
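For readability, the target-side configuration that the perf passes below exercise is the rpc.py sequence traced just above, collected here in one place. Commands and arguments are copied from the log; the sketch assumes nvmf_tgt is already running and answering RPCs on the default /var/tmp/spdk.sock socket:

# Consolidated from the trace above: build the TCP subsystem the perf runs connect to.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_malloc_create 64 512                   # 64 MB Malloc bdev with 512-byte blocks -> Malloc0
$rpc nvmf_create_transport -t tcp -o             # TCP transport, options exactly as passed in the log
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host, -s: serial number
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420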
00:20:52.050 ======================================================== 00:20:52.050 Latency(us) 00:20:52.050 Device Information : IOPS MiB/s Average min max 00:20:52.050 PCIE (0000:00:10.0) NSID 1 from core 0: 24037.54 93.90 1331.19 316.72 7684.67 00:20:52.050 ======================================================== 00:20:52.050 Total : 24037.54 93.90 1331.19 316.72 7684.67 00:20:52.050 00:20:52.050 17:21:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:53.423 Initializing NVMe Controllers 00:20:53.423 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:53.423 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:53.423 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:53.423 Initialization complete. Launching workers. 00:20:53.423 ======================================================== 00:20:53.423 Latency(us) 00:20:53.423 Device Information : IOPS MiB/s Average min max 00:20:53.423 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3619.80 14.14 275.92 101.84 4290.61 00:20:53.423 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 123.75 0.48 8144.12 6045.28 12044.79 00:20:53.423 ======================================================== 00:20:53.423 Total : 3743.55 14.62 536.03 101.84 12044.79 00:20:53.423 00:20:53.423 17:21:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:54.798 Initializing NVMe Controllers 00:20:54.798 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:54.798 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:54.798 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:54.798 Initialization complete. Launching workers. 00:20:54.798 ======================================================== 00:20:54.798 Latency(us) 00:20:54.798 Device Information : IOPS MiB/s Average min max 00:20:54.798 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9111.19 35.59 3512.67 735.47 8085.85 00:20:54.798 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2701.53 10.55 11940.05 7118.86 22825.40 00:20:54.798 ======================================================== 00:20:54.798 Total : 11812.72 46.14 5439.99 735.47 22825.40 00:20:54.798 00:20:54.798 17:21:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:20:54.798 17:21:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:57.330 Initializing NVMe Controllers 00:20:57.330 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:57.330 Controller IO queue size 128, less than required. 00:20:57.330 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:57.330 Controller IO queue size 128, less than required. 
00:20:57.330 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:57.330 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:57.330 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:57.330 Initialization complete. Launching workers. 00:20:57.330 ======================================================== 00:20:57.330 Latency(us) 00:20:57.330 Device Information : IOPS MiB/s Average min max 00:20:57.330 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1244.95 311.24 104888.16 74224.57 190014.58 00:20:57.330 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 597.47 149.37 225776.94 78840.10 360752.48 00:20:57.330 ======================================================== 00:20:57.330 Total : 1842.42 460.60 144090.90 74224.57 360752.48 00:20:57.330 00:20:57.330 17:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:20:57.588 Initializing NVMe Controllers 00:20:57.588 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:57.588 Controller IO queue size 128, less than required. 00:20:57.588 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:57.588 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:20:57.588 Controller IO queue size 128, less than required. 00:20:57.588 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:57.588 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:20:57.588 WARNING: Some requested NVMe devices were skipped 00:20:57.588 No valid NVMe controllers or AIO or URING devices found 00:20:57.588 17:21:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:21:00.115 Initializing NVMe Controllers 00:21:00.115 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:00.115 Controller IO queue size 128, less than required. 00:21:00.115 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:00.115 Controller IO queue size 128, less than required. 00:21:00.115 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:00.115 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:00.115 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:00.115 Initialization complete. Launching workers. 
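The statistics that follow come from the last spdk_nvme_perf pass launched above. Written out with its flags grouped for readability (the command line is copied from the log; the per-flag notes in the comments are the usual spdk_nvme_perf meanings and are worth confirming against the tool's --help for this build):

# Final perf pass: queue depth 128 (-q), 256 KiB I/Os (-o 262144), random mixed
# read/write with 50% reads (-w randrw -M 50), 2-second run (-t 2), against the
# TCP target, printing per-connection transport counters at exit (--transport-stat).
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
    -q 128 -o 262144 -w randrw -M 50 -t 2 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    --transport-stat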
00:21:00.115 00:21:00.115 ==================== 00:21:00.115 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:21:00.115 TCP transport: 00:21:00.115 polls: 6449 00:21:00.115 idle_polls: 3669 00:21:00.115 sock_completions: 2780 00:21:00.115 nvme_completions: 5437 00:21:00.115 submitted_requests: 8114 00:21:00.115 queued_requests: 1 00:21:00.115 00:21:00.115 ==================== 00:21:00.115 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:21:00.115 TCP transport: 00:21:00.115 polls: 9639 00:21:00.115 idle_polls: 6752 00:21:00.115 sock_completions: 2887 00:21:00.115 nvme_completions: 6007 00:21:00.115 submitted_requests: 9070 00:21:00.115 queued_requests: 1 00:21:00.115 ======================================================== 00:21:00.115 Latency(us) 00:21:00.115 Device Information : IOPS MiB/s Average min max 00:21:00.115 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1358.68 339.67 97280.93 72599.22 150289.15 00:21:00.115 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1501.15 375.29 85737.61 46103.88 129365.25 00:21:00.115 ======================================================== 00:21:00.115 Total : 2859.83 714.96 91221.75 46103.88 150289.15 00:21:00.115 00:21:00.115 17:21:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:21:00.115 17:21:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:00.374 17:21:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:21:00.374 17:21:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:21:00.374 17:21:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:21:00.374 17:21:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:00.374 17:21:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:21:00.374 17:21:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:00.374 17:21:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:21:00.374 17:21:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:00.374 17:21:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:00.374 rmmod nvme_tcp 00:21:00.374 rmmod nvme_fabrics 00:21:00.374 rmmod nvme_keyring 00:21:00.374 17:21:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:00.374 17:21:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:21:00.374 17:21:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:21:00.374 17:21:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 86375 ']' 00:21:00.374 17:21:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 86375 00:21:00.374 17:21:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 86375 ']' 00:21:00.374 17:21:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 86375 00:21:00.374 17:21:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:21:00.374 17:21:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:00.374 17:21:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86375 00:21:00.374 killing process with pid 86375 00:21:00.374 17:21:46 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:00.374 17:21:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:00.374 17:21:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86375' 00:21:00.374 17:21:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 86375 00:21:00.374 17:21:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 86375 00:21:00.967 17:21:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:00.967 17:21:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:00.967 17:21:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:00.967 17:21:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:00.967 17:21:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:00.967 17:21:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:00.967 17:21:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:00.967 17:21:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:00.967 17:21:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:00.967 00:21:00.967 real 0m14.409s 00:21:00.967 user 0m52.905s 00:21:00.967 sys 0m3.644s 00:21:00.967 17:21:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:00.967 ************************************ 00:21:00.967 END TEST nvmf_perf 00:21:00.967 17:21:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:00.967 ************************************ 00:21:00.967 17:21:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:00.967 17:21:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:00.967 17:21:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:00.967 17:21:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.247 ************************************ 00:21:01.247 START TEST nvmf_fio_host 00:21:01.247 ************************************ 00:21:01.247 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:01.247 * Looking for test storage... 
00:21:01.247 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:01.247 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:01.247 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:01.247 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:01.247 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:01.247 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:01.247 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:01.247 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:01.247 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:21:01.247 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:01.247 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:01.247 17:21:47 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:21:01.247 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:01.247 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:01.247 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:01.247 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:01.247 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:01.247 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:01.247 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:01.247 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:01.247 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:01.247 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:01.247 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:21:01.247 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:21:01.247 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:01.247 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:01.247 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:01.247 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:01.247 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:01.247 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:01.247 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:01.247 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:01.247 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:01.247 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:01.247 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:01.247 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:21:01.247 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:01.247 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:21:01.247 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:01.247 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:01.247 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:01.247 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:01.247 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:01.247 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:01.247 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:01.247 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:01.247 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:01.247 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:21:01.247 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:01.247 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 
-- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:01.247 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:01.247 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:01.247 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:01.247 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:01.247 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:01.247 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:01.247 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:21:01.247 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:21:01.247 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:21:01.247 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:21:01.247 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:21:01.247 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:21:01.247 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:01.247 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:01.247 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:01.247 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:01.247 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:01.247 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:01.247 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:01.247 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:01.247 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:01.247 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:01.248 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:01.248 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:01.248 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:01.248 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:01.248 Cannot find device "nvmf_tgt_br" 00:21:01.248 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # true 00:21:01.248 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:01.248 Cannot find device "nvmf_tgt_br2" 00:21:01.248 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # true 00:21:01.248 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:01.248 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:01.248 
Cannot find device "nvmf_tgt_br" 00:21:01.248 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # true 00:21:01.248 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:01.248 Cannot find device "nvmf_tgt_br2" 00:21:01.248 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # true 00:21:01.248 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:01.248 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:01.248 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:01.248 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:01.248 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:21:01.248 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:01.248 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:01.248 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:21:01.248 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:01.248 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:01.248 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:01.248 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:01.506 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:01.506 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:01.506 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:01.506 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:01.506 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:01.506 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:01.506 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:01.506 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:01.506 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:01.506 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:01.506 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:01.506 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:01.506 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:01.506 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:01.506 17:21:47 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:01.506 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:01.506 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:01.506 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:01.506 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:01.506 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:01.506 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:01.506 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:21:01.506 00:21:01.506 --- 10.0.0.2 ping statistics --- 00:21:01.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:01.506 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:21:01.506 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:01.506 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:01.506 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.099 ms 00:21:01.506 00:21:01.506 --- 10.0.0.3 ping statistics --- 00:21:01.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:01.506 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:21:01.506 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:01.506 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:01.506 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:21:01.506 00:21:01.506 --- 10.0.0.1 ping statistics --- 00:21:01.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:01.506 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:21:01.506 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:01.506 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@433 -- # return 0 00:21:01.506 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:01.506 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:01.506 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:01.506 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:01.506 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:01.506 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:01.506 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:01.506 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:21:01.506 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:21:01.506 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:01.506 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.506 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=86853 00:21:01.506 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 
0xFFFF -m 0xF 00:21:01.506 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:01.506 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 86853 00:21:01.506 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 86853 ']' 00:21:01.506 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:01.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:01.506 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:01.506 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:01.506 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:01.506 17:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.506 [2024-07-24 17:21:47.732468] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:21:01.506 [2024-07-24 17:21:47.732553] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:01.765 [2024-07-24 17:21:47.875700] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:02.023 [2024-07-24 17:21:48.002237] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:02.023 [2024-07-24 17:21:48.002300] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:02.023 [2024-07-24 17:21:48.002313] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:02.023 [2024-07-24 17:21:48.002323] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:02.023 [2024-07-24 17:21:48.002333] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
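For readability, the nvmf_veth_init sequence traced above reduces to a small, fixed topology: three veth pairs, with the target-facing ends moved into the nvmf_tgt_ns_spdk namespace and the host-facing peers enslaved to a bridge, plus iptables rules that admit NVMe/TCP traffic on port 4420. The sketch below condenses the same commands that appear in the trace (interface names and the 10.0.0.x addresses are the harness defaults, not recommendations):

  # target namespace plus three veth pairs
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  # target-facing ends live inside the namespace
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # initiator gets 10.0.0.1, the target interfaces get 10.0.0.2 and 10.0.0.3
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  # bring every link up, inside and outside the namespace
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # bridge the host-side peers together
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  # admit NVMe/TCP (port 4420) and forward across the bridge
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # reachability check in both directions, exactly as the pings above do
  ping -c 1 10.0.0.2
  ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1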
00:21:02.023 [2024-07-24 17:21:48.002447] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:02.023 [2024-07-24 17:21:48.003121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:02.023 [2024-07-24 17:21:48.003313] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:02.023 [2024-07-24 17:21:48.003334] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:02.589 17:21:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:02.590 17:21:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:21:02.590 17:21:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:02.847 [2024-07-24 17:21:48.906417] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:02.847 17:21:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:21:02.847 17:21:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:02.847 17:21:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.847 17:21:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:21:03.106 Malloc1 00:21:03.106 17:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:03.364 17:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:03.623 17:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:03.881 [2024-07-24 17:21:49.915487] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:03.881 17:21:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:04.140 17:21:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:21:04.140 17:21:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:04.140 17:21:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:04.140 17:21:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:21:04.140 17:21:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:04.140 17:21:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:21:04.140 17:21:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:04.140 17:21:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1341 -- # shift 00:21:04.140 17:21:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:21:04.140 17:21:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:04.140 17:21:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:04.140 17:21:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:21:04.140 17:21:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:04.140 17:21:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:04.140 17:21:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:04.140 17:21:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:04.140 17:21:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:04.140 17:21:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:04.140 17:21:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:21:04.140 17:21:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:04.140 17:21:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:04.140 17:21:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:04.140 17:21:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:04.140 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:04.140 fio-3.35 00:21:04.140 Starting 1 thread 00:21:06.697 00:21:06.697 test: (groupid=0, jobs=1): err= 0: pid=86980: Wed Jul 24 17:21:52 2024 00:21:06.697 read: IOPS=9193, BW=35.9MiB/s (37.7MB/s)(72.0MiB/2006msec) 00:21:06.697 slat (nsec): min=1882, max=337916, avg=2459.41, stdev=3345.74 00:21:06.697 clat (usec): min=3248, max=12885, avg=7268.32, stdev=515.08 00:21:06.697 lat (usec): min=3294, max=12887, avg=7270.78, stdev=514.94 00:21:06.697 clat percentiles (usec): 00:21:06.697 | 1.00th=[ 6128], 5.00th=[ 6521], 10.00th=[ 6718], 20.00th=[ 6849], 00:21:06.697 | 30.00th=[ 7046], 40.00th=[ 7177], 50.00th=[ 7242], 60.00th=[ 7373], 00:21:06.697 | 70.00th=[ 7504], 80.00th=[ 7635], 90.00th=[ 7832], 95.00th=[ 8029], 00:21:06.697 | 99.00th=[ 8586], 99.50th=[ 8848], 99.90th=[11076], 99.95th=[11994], 00:21:06.697 | 99.99th=[12911] 00:21:06.697 bw ( KiB/s): min=35752, max=37696, per=99.92%, avg=36746.00, stdev=794.07, samples=4 00:21:06.697 iops : min= 8938, max= 9424, avg=9186.50, stdev=198.52, samples=4 00:21:06.697 write: IOPS=9198, BW=35.9MiB/s (37.7MB/s)(72.1MiB/2006msec); 0 zone resets 00:21:06.697 slat (nsec): min=1965, max=287828, avg=2568.40, stdev=2642.22 00:21:06.697 clat (usec): min=2466, max=12364, avg=6605.79, stdev=465.61 00:21:06.697 lat (usec): min=2479, max=12367, avg=6608.36, stdev=465.57 00:21:06.697 clat percentiles (usec): 00:21:06.697 | 1.00th=[ 5604], 5.00th=[ 5932], 10.00th=[ 6128], 20.00th=[ 6259], 
00:21:06.697 | 30.00th=[ 6390], 40.00th=[ 6521], 50.00th=[ 6587], 60.00th=[ 6718], 00:21:06.697 | 70.00th=[ 6783], 80.00th=[ 6915], 90.00th=[ 7111], 95.00th=[ 7242], 00:21:06.697 | 99.00th=[ 7635], 99.50th=[ 7767], 99.90th=[10290], 99.95th=[11207], 00:21:06.697 | 99.99th=[12387] 00:21:06.697 bw ( KiB/s): min=36552, max=36888, per=99.99%, avg=36792.00, stdev=160.40, samples=4 00:21:06.697 iops : min= 9138, max= 9222, avg=9198.00, stdev=40.10, samples=4 00:21:06.697 lat (msec) : 4=0.07%, 10=99.78%, 20=0.15% 00:21:06.697 cpu : usr=67.68%, sys=23.69%, ctx=18, majf=0, minf=7 00:21:06.697 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:21:06.697 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:06.697 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:06.697 issued rwts: total=18443,18453,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:06.697 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:06.697 00:21:06.697 Run status group 0 (all jobs): 00:21:06.697 READ: bw=35.9MiB/s (37.7MB/s), 35.9MiB/s-35.9MiB/s (37.7MB/s-37.7MB/s), io=72.0MiB (75.5MB), run=2006-2006msec 00:21:06.697 WRITE: bw=35.9MiB/s (37.7MB/s), 35.9MiB/s-35.9MiB/s (37.7MB/s-37.7MB/s), io=72.1MiB (75.6MB), run=2006-2006msec 00:21:06.697 17:21:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:06.697 17:21:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:06.697 17:21:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:21:06.697 17:21:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:06.697 17:21:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:21:06.697 17:21:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:06.697 17:21:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:21:06.697 17:21:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:21:06.697 17:21:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:06.697 17:21:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:06.697 17:21:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:21:06.697 17:21:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:06.697 17:21:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:06.697 17:21:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:06.697 17:21:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:06.697 17:21:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:21:06.697 17:21:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:06.697 17:21:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:06.697 17:21:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:06.697 17:21:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:06.697 17:21:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:06.697 17:21:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:06.697 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:21:06.697 fio-3.35 00:21:06.697 Starting 1 thread 00:21:09.231 00:21:09.231 test: (groupid=0, jobs=1): err= 0: pid=87027: Wed Jul 24 17:21:55 2024 00:21:09.231 read: IOPS=8140, BW=127MiB/s (133MB/s)(255MiB/2006msec) 00:21:09.231 slat (usec): min=2, max=115, avg= 3.79, stdev= 2.25 00:21:09.231 clat (usec): min=2226, max=16791, avg=9260.78, stdev=2242.60 00:21:09.231 lat (usec): min=2230, max=16802, avg=9264.57, stdev=2242.67 00:21:09.231 clat percentiles (usec): 00:21:09.231 | 1.00th=[ 4948], 5.00th=[ 5800], 10.00th=[ 6325], 20.00th=[ 7111], 00:21:09.231 | 30.00th=[ 7898], 40.00th=[ 8586], 50.00th=[ 9241], 60.00th=[ 9896], 00:21:09.231 | 70.00th=[10683], 80.00th=[11207], 90.00th=[11863], 95.00th=[13042], 00:21:09.231 | 99.00th=[14877], 99.50th=[15533], 99.90th=[16188], 99.95th=[16581], 00:21:09.231 | 99.99th=[16712] 00:21:09.231 bw ( KiB/s): min=60032, max=77344, per=51.68%, avg=67312.00, stdev=7678.87, samples=4 00:21:09.231 iops : min= 3752, max= 4834, avg=4207.00, stdev=479.93, samples=4 00:21:09.231 write: IOPS=4880, BW=76.3MiB/s (80.0MB/s)(138MiB/1809msec); 0 zone resets 00:21:09.231 slat (usec): min=32, max=360, avg=38.74, stdev= 8.83 00:21:09.231 clat (usec): min=4473, max=17697, avg=11296.28, stdev=1899.98 00:21:09.231 lat (usec): min=4506, max=17734, avg=11335.01, stdev=1901.03 00:21:09.231 clat percentiles (usec): 00:21:09.231 | 1.00th=[ 7635], 5.00th=[ 8455], 10.00th=[ 9110], 20.00th=[ 9634], 00:21:09.231 | 30.00th=[10159], 40.00th=[10683], 50.00th=[11076], 60.00th=[11600], 00:21:09.231 | 70.00th=[12125], 80.00th=[12911], 90.00th=[13829], 95.00th=[14746], 00:21:09.231 | 99.00th=[16057], 99.50th=[16581], 99.90th=[17433], 99.95th=[17695], 00:21:09.231 | 99.99th=[17695] 00:21:09.231 bw ( KiB/s): min=62016, max=80000, per=89.73%, avg=70072.00, stdev=7944.10, samples=4 00:21:09.231 iops : min= 3876, max= 5000, avg=4379.50, stdev=496.51, samples=4 00:21:09.231 lat (msec) : 4=0.12%, 10=49.15%, 20=50.73% 00:21:09.231 cpu : usr=72.27%, sys=18.20%, ctx=13, majf=0, minf=22 00:21:09.231 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:21:09.231 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:09.231 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:09.231 issued rwts: total=16330,8829,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:09.231 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:09.231 00:21:09.231 Run status group 0 (all jobs): 00:21:09.231 READ: bw=127MiB/s (133MB/s), 127MiB/s-127MiB/s (133MB/s-133MB/s), io=255MiB (268MB), run=2006-2006msec 00:21:09.232 WRITE: 
bw=76.3MiB/s (80.0MB/s), 76.3MiB/s-76.3MiB/s (80.0MB/s-80.0MB/s), io=138MiB (145MB), run=1809-1809msec 00:21:09.232 17:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:09.232 17:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:21:09.232 17:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:21:09.232 17:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:21:09.232 17:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:21:09.232 17:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:09.232 17:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:21:09.232 17:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:09.232 17:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:21:09.232 17:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:09.232 17:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:09.232 rmmod nvme_tcp 00:21:09.232 rmmod nvme_fabrics 00:21:09.490 rmmod nvme_keyring 00:21:09.490 17:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:09.490 17:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:21:09.490 17:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:21:09.490 17:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 86853 ']' 00:21:09.490 17:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 86853 00:21:09.490 17:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 86853 ']' 00:21:09.490 17:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 86853 00:21:09.490 17:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:21:09.490 17:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:09.490 17:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86853 00:21:09.490 17:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:09.490 17:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:09.490 killing process with pid 86853 00:21:09.490 17:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86853' 00:21:09.490 17:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 86853 00:21:09.490 17:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 86853 00:21:09.749 17:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:09.749 17:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:09.749 17:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:09.749 17:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:09.749 17:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:09.749 17:21:55 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:09.749 17:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:09.749 17:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:09.749 17:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:09.749 00:21:09.749 real 0m8.605s 00:21:09.749 user 0m35.055s 00:21:09.749 sys 0m2.231s 00:21:09.749 17:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:09.749 ************************************ 00:21:09.749 END TEST nvmf_fio_host 00:21:09.749 17:21:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.749 ************************************ 00:21:09.749 17:21:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:09.749 17:21:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:09.749 17:21:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:09.749 17:21:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.749 ************************************ 00:21:09.749 START TEST nvmf_failover 00:21:09.749 ************************************ 00:21:09.749 17:21:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:09.749 * Looking for test storage... 00:21:09.749 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:09.749 17:21:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:09.749 17:21:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:21:09.749 17:21:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:09.749 17:21:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:09.749 17:21:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:09.749 17:21:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:09.749 17:21:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:09.749 17:21:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:09.749 17:21:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:09.749 17:21:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:09.749 17:21:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:09.749 17:21:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:09.749 17:21:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:21:09.749 17:21:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:21:09.749 17:21:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:09.749 17:21:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:21:09.749 17:21:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:09.749 17:21:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:09.749 17:21:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:09.749 17:21:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:09.749 17:21:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:09.749 17:21:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:09.750 17:21:55 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.750 17:21:55 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.750 17:21:55 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.750 17:21:55 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:21:09.750 17:21:55 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.750 
17:21:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:21:09.750 17:21:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:09.750 17:21:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:09.750 17:21:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:09.750 17:21:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:09.750 17:21:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:09.750 17:21:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:09.750 17:21:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:09.750 17:21:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:09.750 17:21:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:09.750 17:21:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:09.750 17:21:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:09.750 17:21:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:09.750 17:21:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:21:09.750 17:21:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:09.750 17:21:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:09.750 17:21:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:09.750 17:21:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:09.750 17:21:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:09.750 17:21:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:09.750 17:21:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:09.750 17:21:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:09.750 17:21:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:21:09.750 17:21:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:21:09.750 17:21:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:21:09.750 17:21:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:21:09.750 17:21:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:21:09.750 17:21:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # nvmf_veth_init 00:21:09.750 17:21:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:09.750 17:21:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:09.750 17:21:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:09.750 17:21:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:09.750 17:21:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:09.750 17:21:55 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:09.750 17:21:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:09.750 17:21:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:09.750 17:21:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:09.750 17:21:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:09.750 17:21:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:09.750 17:21:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:09.750 17:21:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:10.009 17:21:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:10.009 Cannot find device "nvmf_tgt_br" 00:21:10.009 17:21:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # true 00:21:10.009 17:21:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:10.009 Cannot find device "nvmf_tgt_br2" 00:21:10.009 17:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # true 00:21:10.009 17:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:10.009 17:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:10.009 Cannot find device "nvmf_tgt_br" 00:21:10.009 17:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # true 00:21:10.009 17:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:10.009 Cannot find device "nvmf_tgt_br2" 00:21:10.009 17:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # true 00:21:10.009 17:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:10.009 17:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:10.009 17:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:10.009 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:10.009 17:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:21:10.009 17:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:10.009 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:10.009 17:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:21:10.009 17:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:10.009 17:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:10.009 17:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:10.009 17:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:10.009 17:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:10.009 17:21:56 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:10.009 17:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:10.009 17:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:10.009 17:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:10.009 17:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:10.009 17:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:10.009 17:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:10.009 17:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:10.009 17:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:10.009 17:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:10.009 17:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:10.009 17:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:10.009 17:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:10.009 17:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:10.009 17:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:10.009 17:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:10.009 17:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:10.268 17:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:10.268 17:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:10.268 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:10.268 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:21:10.268 00:21:10.268 --- 10.0.0.2 ping statistics --- 00:21:10.268 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:10.268 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:21:10.268 17:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:10.268 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:10.268 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:21:10.268 00:21:10.268 --- 10.0.0.3 ping statistics --- 00:21:10.268 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:10.268 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:21:10.268 17:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:10.268 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:10.268 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:21:10.268 00:21:10.268 --- 10.0.0.1 ping statistics --- 00:21:10.268 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:10.268 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:21:10.268 17:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:10.268 17:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@433 -- # return 0 00:21:10.268 17:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:10.268 17:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:10.268 17:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:10.268 17:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:10.268 17:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:10.268 17:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:10.268 17:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:10.268 17:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:21:10.268 17:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:10.268 17:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:10.268 17:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:10.268 17:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=87243 00:21:10.268 17:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 87243 00:21:10.268 17:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:10.268 17:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 87243 ']' 00:21:10.268 17:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:10.268 17:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:10.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:10.268 17:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:10.268 17:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:10.268 17:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:10.268 [2024-07-24 17:21:56.333783] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:21:10.268 [2024-07-24 17:21:56.333892] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:10.268 [2024-07-24 17:21:56.468355] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:10.526 [2024-07-24 17:21:56.577767] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
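The failover harness brings up its own target instance inside the same namespace, this time pinned to cores 1-3 (-m 0xE) with all tracepoint groups enabled (-e 0xFFFF), and then blocks until the application's RPC socket answers before issuing any configuration. A minimal equivalent of that bring-up is sketched below; waitforlisten is the harness helper that does the waiting, and the polling loop here is only a hand-rolled stand-in for it, not its actual implementation:

  # start nvmf_tgt inside the target namespace (shm id 0, tracepoints on, core mask 0xE)
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  # poll the RPC socket until the target is ready (stand-in for waitforlisten)
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" 2>/dev/null || exit 1   # give up if the target died during startup
      sleep 0.5
  done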
00:21:10.526 [2024-07-24 17:21:56.578050] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:10.526 [2024-07-24 17:21:56.578273] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:10.526 [2024-07-24 17:21:56.578401] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:10.526 [2024-07-24 17:21:56.578415] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:10.526 [2024-07-24 17:21:56.578519] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:10.526 [2024-07-24 17:21:56.579063] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:10.526 [2024-07-24 17:21:56.579072] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:11.460 17:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:11.460 17:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:21:11.460 17:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:11.460 17:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:11.460 17:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:11.460 17:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:11.460 17:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:11.460 [2024-07-24 17:21:57.587652] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:11.460 17:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:11.719 Malloc0 00:21:11.719 17:21:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:11.978 17:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:12.236 17:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:12.496 [2024-07-24 17:21:58.573616] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:12.496 17:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:12.755 [2024-07-24 17:21:58.849875] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:12.755 17:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:13.014 [2024-07-24 17:21:59.082070] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:21:13.014 17:21:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=87355 00:21:13.014 
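Condensed, the RPC traffic in this stretch of the trace builds the whole failover target configuration: one TCP transport, one 64 MiB malloc bdev exposed as a namespace of nqn.2016-06.io.spdk:cnode1, and three listeners on ports 4420, 4421 and 4422 so the initiator always has another portal to fail over to. Issued by hand, the same setup would look roughly like this (the loop is a condensation; the harness adds the listeners one by one):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc" nvmf_create_transport -t tcp -o -u 8192          # transport options as used by the harness
  "$rpc" bdev_malloc_create 64 512 -b Malloc0             # 64 MiB bdev, 512-byte blocks
  "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do                          # three portals on the same subsystem
      "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
  done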
17:21:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:13.014 17:21:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 87355 /var/tmp/bdevperf.sock 00:21:13.014 17:21:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 87355 ']' 00:21:13.014 17:21:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:13.014 17:21:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:13.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:13.014 17:21:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:13.014 17:21:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:21:13.014 17:21:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:13.014 17:21:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:13.949 17:22:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:13.949 17:22:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:21:13.949 17:22:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:14.207 NVMe0n1 00:21:14.207 17:22:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:14.466 00:21:14.724 17:22:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=87402 00:21:14.724 17:22:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:14.724 17:22:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:21:15.658 17:22:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:15.934 [2024-07-24 17:22:01.980662] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d60e50 is same with the state(5) to be set 00:21:15.934 [2024-07-24 17:22:01.980736] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d60e50 is same with the state(5) to be set 00:21:15.934 [2024-07-24 17:22:01.980747] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d60e50 is same with the state(5) to be set 00:21:15.934 [2024-07-24 17:22:01.980755] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d60e50 is same with the state(5) to be set 00:21:15.934 [2024-07-24 17:22:01.980763] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d60e50 is same with the 
state(5) to be set 00:21:15.934 [2024-07-24 17:22:01.980772] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d60e50 is same with the state(5) to be set 00:21:15.934 [2024-07-24 17:22:01.980780] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d60e50 is same with the state(5) to be set 00:21:15.934 [2024-07-24 17:22:01.980788] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d60e50 is same with the state(5) to be set 00:21:15.934 [2024-07-24 17:22:01.980796] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d60e50 is same with the state(5) to be set 00:21:15.935 [2024-07-24 17:22:01.980804] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d60e50 is same with the state(5) to be set 00:21:15.935 [2024-07-24 17:22:01.980812] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d60e50 is same with the state(5) to be set 00:21:15.935 [2024-07-24 17:22:01.980820] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d60e50 is same with the state(5) to be set 00:21:15.935 [2024-07-24 17:22:01.980828] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d60e50 is same with the state(5) to be set 00:21:15.935 [2024-07-24 17:22:01.980836] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d60e50 is same with the state(5) to be set 00:21:15.935 [2024-07-24 17:22:01.980844] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d60e50 is same with the state(5) to be set 00:21:15.935 [2024-07-24 17:22:01.980852] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d60e50 is same with the state(5) to be set 00:21:15.935 [2024-07-24 17:22:01.980860] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d60e50 is same with the state(5) to be set 00:21:15.935 [2024-07-24 17:22:01.980868] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d60e50 is same with the state(5) to be set 00:21:15.935 [2024-07-24 17:22:01.980875] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d60e50 is same with the state(5) to be set 00:21:15.935 [2024-07-24 17:22:01.980883] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d60e50 is same with the state(5) to be set 00:21:15.935 [2024-07-24 17:22:01.980891] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d60e50 is same with the state(5) to be set 00:21:15.935 [2024-07-24 17:22:01.980898] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d60e50 is same with the state(5) to be set 00:21:15.935 [2024-07-24 17:22:01.980906] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d60e50 is same with the state(5) to be set 00:21:15.935 [2024-07-24 17:22:01.980914] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d60e50 is same with the state(5) to be set 00:21:15.935 [2024-07-24 17:22:01.980921] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d60e50 is same with the state(5) to be set 00:21:15.935 [2024-07-24 17:22:01.980929] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d60e50 is same with the state(5) to be set 00:21:15.935 [2024-07-24 17:22:01.980936] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1d60e50 is same with the state(5) to be set 00:21:15.935 [2024-07-24 17:22:01.980944] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d60e50 is same with the state(5) to be set 00:21:15.935 [2024-07-24 17:22:01.980974] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d60e50 is same with the state(5) to be set 00:21:15.935 [2024-07-24 17:22:01.981000] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d60e50 is same with the state(5) to be set 00:21:15.935 [2024-07-24 17:22:01.981009] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d60e50 is same with the state(5) to be set 00:21:15.935 [2024-07-24 17:22:01.981017] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d60e50 is same with the state(5) to be set 00:21:15.935 17:22:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:21:19.236 17:22:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:19.237 00:21:19.237 17:22:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:19.495 17:22:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:21:22.777 17:22:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:22.777 [2024-07-24 17:22:08.852561] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:22.777 17:22:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:21:23.711 17:22:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:23.968 [2024-07-24 17:22:10.125381] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1af30 is same with the state(5) to be set 00:21:23.968 [2024-07-24 17:22:10.125449] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1af30 is same with the state(5) to be set 00:21:23.968 [2024-07-24 17:22:10.125462] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1af30 is same with the state(5) to be set 00:21:23.968 [2024-07-24 17:22:10.125472] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1af30 is same with the state(5) to be set 00:21:23.968 [2024-07-24 17:22:10.125483] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1af30 is same with the state(5) to be set 00:21:23.968 [2024-07-24 17:22:10.125492] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1af30 is same with the state(5) to be set 00:21:23.968 17:22:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 87402 00:21:30.532 0 00:21:30.532 17:22:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 87355 00:21:30.532 17:22:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 87355 ']' 00:21:30.532 17:22:15 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 87355 00:21:30.532 17:22:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:21:30.532 17:22:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:30.532 17:22:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87355 00:21:30.532 killing process with pid 87355 00:21:30.532 17:22:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:30.532 17:22:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:30.532 17:22:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87355' 00:21:30.532 17:22:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 87355 00:21:30.532 17:22:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 87355 00:21:30.532 17:22:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:30.532 [2024-07-24 17:21:59.149888] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:21:30.532 [2024-07-24 17:21:59.150075] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87355 ] 00:21:30.532 [2024-07-24 17:21:59.282623] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:30.532 [2024-07-24 17:21:59.388438] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:30.532 Running I/O for 15 seconds... 
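For readers following the trace: before the I/O dump below, the failover host test has started bdevperf in RPC-driven mode and registered two TCP paths to the same subsystem, so bdev_nvme has a second trid to fail over to. A minimal sketch of that setup, using only the binaries, socket path, ports, and NQN visible in the commands logged above (not the host/failover.sh script itself), looks like this:

```bash
# Sketch of the setup performed in the trace above; paths and ports taken from the log.
BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/bdevperf.sock
NQN=nqn.2016-06.io.spdk:cnode1

# Start bdevperf waiting for RPC configuration (-z): queue depth 128, 4 KiB verify I/O, 15 s run.
$BDEVPERF -z -r "$SOCK" -q 128 -o 4096 -w verify -t 15 -f &

# Register the primary (4420) and a secondary (4421) TCP path to the same subsystem as NVMe0.
$RPC -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n "$NQN"
$RPC -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n "$NQN"

# Kick off the timed run over the same RPC socket.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$SOCK" perform_tests &
```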
00:21:30.532 [2024-07-24 17:22:01.983531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:86768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.532 [2024-07-24 17:22:01.983830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.532 [2024-07-24 17:22:01.983939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:86016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.532 [2024-07-24 17:22:01.984021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.532 [2024-07-24 17:22:01.984095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:86024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.532 [2024-07-24 17:22:01.984215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.532 [2024-07-24 17:22:01.984297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:86032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.532 [2024-07-24 17:22:01.984387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.532 [2024-07-24 17:22:01.984465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:86040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.532 [2024-07-24 17:22:01.984574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.532 [2024-07-24 17:22:01.984646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:86048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.532 [2024-07-24 17:22:01.984725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.532 [2024-07-24 17:22:01.984797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:86056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.532 [2024-07-24 17:22:01.984875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.532 [2024-07-24 17:22:01.984951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:86064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.532 [2024-07-24 17:22:01.985031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.532 [2024-07-24 17:22:01.985103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:86072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.532 [2024-07-24 17:22:01.985224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.532 [2024-07-24 17:22:01.985295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:86080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.532 [2024-07-24 17:22:01.985378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.532 [2024-07-24 17:22:01.985459] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:86088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.532 [2024-07-24 17:22:01.985576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.532 [2024-07-24 17:22:01.985681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:86096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.532 [2024-07-24 17:22:01.985761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.532 [2024-07-24 17:22:01.985825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:86104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.532 [2024-07-24 17:22:01.985898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.532 [2024-07-24 17:22:01.985961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:86112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.532 [2024-07-24 17:22:01.986043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.532 [2024-07-24 17:22:01.986131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:86120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.532 [2024-07-24 17:22:01.986258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.532 [2024-07-24 17:22:01.986337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:86128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.532 [2024-07-24 17:22:01.986417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.532 [2024-07-24 17:22:01.986521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:86136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.532 [2024-07-24 17:22:01.986596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.532 [2024-07-24 17:22:01.986668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:86144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.532 [2024-07-24 17:22:01.986751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.532 [2024-07-24 17:22:01.986825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:86152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.532 [2024-07-24 17:22:01.986906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.532 [2024-07-24 17:22:01.987021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:86160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.533 [2024-07-24 17:22:01.987099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.533 [2024-07-24 17:22:01.987197] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:86168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.533 [2024-07-24 17:22:01.987298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.533 [2024-07-24 17:22:01.987379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:86176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.533 [2024-07-24 17:22:01.987476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.533 [2024-07-24 17:22:01.987541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:86184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.533 [2024-07-24 17:22:01.987633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.533 [2024-07-24 17:22:01.987704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:86192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.533 [2024-07-24 17:22:01.987797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.533 [2024-07-24 17:22:01.987861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:86200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.533 [2024-07-24 17:22:01.987939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.533 [2024-07-24 17:22:01.988001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:86208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.533 [2024-07-24 17:22:01.988076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.533 [2024-07-24 17:22:01.988175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:86216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.533 [2024-07-24 17:22:01.988245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.533 [2024-07-24 17:22:01.988324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:86224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.533 [2024-07-24 17:22:01.988407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.533 [2024-07-24 17:22:01.988512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:86232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.533 [2024-07-24 17:22:01.988535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.533 [2024-07-24 17:22:01.988552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:86240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.533 [2024-07-24 17:22:01.988566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.533 [2024-07-24 17:22:01.988581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:31 nsid:1 lba:86248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.533 [2024-07-24 17:22:01.988594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.533 [2024-07-24 17:22:01.988614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:86256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.533 [2024-07-24 17:22:01.988628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.533 [2024-07-24 17:22:01.988643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.533 [2024-07-24 17:22:01.988657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.533 [2024-07-24 17:22:01.988671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:86272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.533 [2024-07-24 17:22:01.988685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.533 [2024-07-24 17:22:01.988700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:86280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.533 [2024-07-24 17:22:01.988713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.533 [2024-07-24 17:22:01.988728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:86288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.533 [2024-07-24 17:22:01.988741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.533 [2024-07-24 17:22:01.988764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:86296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.533 [2024-07-24 17:22:01.988783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.533 [2024-07-24 17:22:01.988798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:86304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.533 [2024-07-24 17:22:01.988812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.533 [2024-07-24 17:22:01.988827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:86312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.533 [2024-07-24 17:22:01.988840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.533 [2024-07-24 17:22:01.988855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:86320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.533 [2024-07-24 17:22:01.988869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.533 [2024-07-24 17:22:01.988884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:86328 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.533 [2024-07-24 17:22:01.988897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.533 [2024-07-24 17:22:01.988912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:86336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.533 [2024-07-24 17:22:01.988926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.533 [2024-07-24 17:22:01.988940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:86344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.533 [2024-07-24 17:22:01.988954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.533 [2024-07-24 17:22:01.988969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:86352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.533 [2024-07-24 17:22:01.988982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.533 [2024-07-24 17:22:01.988997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:86360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.533 [2024-07-24 17:22:01.989010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.533 [2024-07-24 17:22:01.989025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:86368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.533 [2024-07-24 17:22:01.989038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.533 [2024-07-24 17:22:01.989052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:86376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.533 [2024-07-24 17:22:01.989066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.533 [2024-07-24 17:22:01.989080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:86384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.533 [2024-07-24 17:22:01.989094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.533 [2024-07-24 17:22:01.989125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:86392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.533 [2024-07-24 17:22:01.989159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.533 [2024-07-24 17:22:01.989178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:86400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.533 [2024-07-24 17:22:01.989192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.533 [2024-07-24 17:22:01.989208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:86408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:30.533 [2024-07-24 17:22:01.989222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.533 [2024-07-24 17:22:01.989237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:86416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.533 [2024-07-24 17:22:01.989251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.533 [2024-07-24 17:22:01.989266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:86424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.533 [2024-07-24 17:22:01.989280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.533 [2024-07-24 17:22:01.989295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:86432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.533 [2024-07-24 17:22:01.989309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.533 [2024-07-24 17:22:01.989324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:86440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.533 [2024-07-24 17:22:01.989338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.533 [2024-07-24 17:22:01.989354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:86448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.534 [2024-07-24 17:22:01.989369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.534 [2024-07-24 17:22:01.989385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:86456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.534 [2024-07-24 17:22:01.989406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.534 [2024-07-24 17:22:01.989422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:86464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.534 [2024-07-24 17:22:01.989436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.534 [2024-07-24 17:22:01.989451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:86472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.534 [2024-07-24 17:22:01.989480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.534 [2024-07-24 17:22:01.989495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:86480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.534 [2024-07-24 17:22:01.989508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.534 [2024-07-24 17:22:01.989523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:86488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.534 [2024-07-24 17:22:01.989537] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.534 [2024-07-24 17:22:01.989552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:86496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.534 [2024-07-24 17:22:01.989572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.534 [2024-07-24 17:22:01.989588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:86504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.534 [2024-07-24 17:22:01.989602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.534 [2024-07-24 17:22:01.989616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:86512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.534 [2024-07-24 17:22:01.989631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.534 [2024-07-24 17:22:01.989646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:86520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.534 [2024-07-24 17:22:01.989660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.534 [2024-07-24 17:22:01.989675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:86528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.534 [2024-07-24 17:22:01.989688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.534 [2024-07-24 17:22:01.989703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:86536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.534 [2024-07-24 17:22:01.989717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.534 [2024-07-24 17:22:01.989732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:86544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.534 [2024-07-24 17:22:01.989745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.534 [2024-07-24 17:22:01.989760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:86552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.534 [2024-07-24 17:22:01.989774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.534 [2024-07-24 17:22:01.989789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:86560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.534 [2024-07-24 17:22:01.989802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.534 [2024-07-24 17:22:01.989817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:86568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.534 [2024-07-24 17:22:01.989831] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.534 [2024-07-24 17:22:01.989846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:86576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.534 [2024-07-24 17:22:01.989860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.534 [2024-07-24 17:22:01.989875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:86584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.534 [2024-07-24 17:22:01.989893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.534 [2024-07-24 17:22:01.989908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.534 [2024-07-24 17:22:01.989922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.534 [2024-07-24 17:22:01.989944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:86600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.534 [2024-07-24 17:22:01.989958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.534 [2024-07-24 17:22:01.989973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:86608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.534 [2024-07-24 17:22:01.989987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.534 [2024-07-24 17:22:01.990002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:86616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.534 [2024-07-24 17:22:01.990015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.534 [2024-07-24 17:22:01.990031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:86624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.534 [2024-07-24 17:22:01.990044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.534 [2024-07-24 17:22:01.990059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:86632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.534 [2024-07-24 17:22:01.990073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.534 [2024-07-24 17:22:01.990088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:86640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.534 [2024-07-24 17:22:01.990101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.534 [2024-07-24 17:22:01.990143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:86648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.534 [2024-07-24 17:22:01.990159] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.534 [2024-07-24 17:22:01.990175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:86656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.534 [2024-07-24 17:22:01.990189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.534 [2024-07-24 17:22:01.990204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:86664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.534 [2024-07-24 17:22:01.990219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.534 [2024-07-24 17:22:01.990234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:86672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.534 [2024-07-24 17:22:01.990248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.534 [2024-07-24 17:22:01.990264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:86680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.534 [2024-07-24 17:22:01.990278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.534 [2024-07-24 17:22:01.990293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:86688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.534 [2024-07-24 17:22:01.990308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.534 [2024-07-24 17:22:01.990323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:86696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.534 [2024-07-24 17:22:01.990344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.534 [2024-07-24 17:22:01.990361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:86704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.534 [2024-07-24 17:22:01.990376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.534 [2024-07-24 17:22:01.990391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:86712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.534 [2024-07-24 17:22:01.990410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.534 [2024-07-24 17:22:01.990426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:86720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.534 [2024-07-24 17:22:01.990440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.534 [2024-07-24 17:22:01.990471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:86728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.534 [2024-07-24 17:22:01.990485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.534 [2024-07-24 17:22:01.990499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:86736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.534 [2024-07-24 17:22:01.990513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.534 [2024-07-24 17:22:01.990528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:86744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.534 [2024-07-24 17:22:01.990542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.535 [2024-07-24 17:22:01.990557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:86752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.535 [2024-07-24 17:22:01.990571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.535 [2024-07-24 17:22:01.990585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:86760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.535 [2024-07-24 17:22:01.990599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.535 [2024-07-24 17:22:01.990615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:86776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.535 [2024-07-24 17:22:01.990628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.535 [2024-07-24 17:22:01.990643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:86784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.535 [2024-07-24 17:22:01.990657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.535 [2024-07-24 17:22:01.990672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:86792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.535 [2024-07-24 17:22:01.990685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.535 [2024-07-24 17:22:01.990701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:86800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.535 [2024-07-24 17:22:01.990714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.535 [2024-07-24 17:22:01.990735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:86808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.535 [2024-07-24 17:22:01.990750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.535 [2024-07-24 17:22:01.990765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.535 [2024-07-24 17:22:01.990778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:30.535 [2024-07-24 17:22:01.990793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:86824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.535 [2024-07-24 17:22:01.990807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.535 [2024-07-24 17:22:01.990822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:86832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.535 [2024-07-24 17:22:01.990836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.535 [2024-07-24 17:22:01.990851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.535 [2024-07-24 17:22:01.990865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.535 [2024-07-24 17:22:01.990881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:86848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.535 [2024-07-24 17:22:01.990899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.535 [2024-07-24 17:22:01.990944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:86856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.535 [2024-07-24 17:22:01.990961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.535 [2024-07-24 17:22:01.990976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:86864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.535 [2024-07-24 17:22:01.990991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.535 [2024-07-24 17:22:01.991007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:86872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.535 [2024-07-24 17:22:01.991021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.535 [2024-07-24 17:22:01.991037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:86880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.535 [2024-07-24 17:22:01.991052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.535 [2024-07-24 17:22:01.991067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:86888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.535 [2024-07-24 17:22:01.991082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.535 [2024-07-24 17:22:01.991097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:86896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.535 [2024-07-24 17:22:01.991121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.535 [2024-07-24 17:22:01.991139] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:86904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.535 [2024-07-24 17:22:01.991154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.535 [2024-07-24 17:22:01.991178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:86912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.535 [2024-07-24 17:22:01.991193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.535 [2024-07-24 17:22:01.991209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:86920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.535 [2024-07-24 17:22:01.991224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.535 [2024-07-24 17:22:01.991241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:86928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.535 [2024-07-24 17:22:01.991256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.535 [2024-07-24 17:22:01.991271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:86936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.535 [2024-07-24 17:22:01.991286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.535 [2024-07-24 17:22:01.991302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:86944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.535 [2024-07-24 17:22:01.991316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.535 [2024-07-24 17:22:01.991332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:86952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.535 [2024-07-24 17:22:01.991347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.535 [2024-07-24 17:22:01.991363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:86960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.535 [2024-07-24 17:22:01.991377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.535 [2024-07-24 17:22:01.991393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:86968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.535 [2024-07-24 17:22:01.991408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.535 [2024-07-24 17:22:01.991424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:86976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.535 [2024-07-24 17:22:01.991444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.535 [2024-07-24 17:22:01.991460] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:86984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.535 [2024-07-24 17:22:01.991489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.535 [2024-07-24 17:22:01.991505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:86992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.535 [2024-07-24 17:22:01.991518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.535 [2024-07-24 17:22:01.991533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:87000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.535 [2024-07-24 17:22:01.991547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.535 [2024-07-24 17:22:01.991563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:87008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.535 [2024-07-24 17:22:01.991584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.535 [2024-07-24 17:22:01.991600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:87016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.535 [2024-07-24 17:22:01.991628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.535 [2024-07-24 17:22:01.991643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:87024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.535 [2024-07-24 17:22:01.991657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.535 [2024-07-24 17:22:01.991701] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:30.535 [2024-07-24 17:22:01.991716] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:30.535 [2024-07-24 17:22:01.991727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87032 len:8 PRP1 0x0 PRP2 0x0 00:21:30.535 [2024-07-24 17:22:01.991741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.535 [2024-07-24 17:22:01.991797] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xaa48a0 was disconnected and freed. reset controller. 
00:21:30.535 [2024-07-24 17:22:01.991820] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:21:30.535 [2024-07-24 17:22:01.991876] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:30.535 [2024-07-24 17:22:01.991896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.535 [2024-07-24 17:22:01.991910] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:30.536 [2024-07-24 17:22:01.991924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.536 [2024-07-24 17:22:01.991938] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:30.536 [2024-07-24 17:22:01.991952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.536 [2024-07-24 17:22:01.991966] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:30.536 [2024-07-24 17:22:01.991979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.536 [2024-07-24 17:22:01.991992] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:30.536 [2024-07-24 17:22:01.992044] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa53e30 (9): Bad file descriptor 00:21:30.536 [2024-07-24 17:22:01.995761] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:30.536 [2024-07-24 17:22:02.030407] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
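The ABORTED - SQ DELETION completions above are the in-flight verify I/Os on the 10.0.0.2:4420 path being failed back when that listener is removed; bdev_nvme then switches the trid to 10.0.0.2:4421 and resets the controller, which is the "Resetting controller successful" line. On the target side the test alternates listeners so that each path disappears only after another one is available. A condensed sketch of that listener sequence, reusing only the rpc.py calls, ports, and sleeps shown earlier in this trace, is:

```bash
# Condensed listener sequence from the trace above; subsystem listeners are managed on the
# target's default RPC socket, while the extra path is attached via the bdevperf socket.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1

$RPC nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420   # triggers failover 4420 -> 4421
sleep 3
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n "$NQN"
$RPC nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4421   # I/O moves to the 4422 path
sleep 3
$RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420      # restore the original listener
sleep 1
$RPC nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4422   # final failover back to 4420
```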
00:21:30.536 [2024-07-24 17:22:05.575404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:98344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.536 [2024-07-24 17:22:05.575534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.536 [2024-07-24 17:22:05.575594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:98352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.536 [2024-07-24 17:22:05.575614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.536 [2024-07-24 17:22:05.575698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:98360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.536 [2024-07-24 17:22:05.575717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.536 [2024-07-24 17:22:05.575735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:98368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.536 [2024-07-24 17:22:05.575752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.536 [2024-07-24 17:22:05.575770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:98376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.536 [2024-07-24 17:22:05.575786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.536 [2024-07-24 17:22:05.575805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:98384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.536 [2024-07-24 17:22:05.575822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.536 [2024-07-24 17:22:05.575840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:98392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.536 [2024-07-24 17:22:05.575856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.536 [2024-07-24 17:22:05.575874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:98400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.536 [2024-07-24 17:22:05.575890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.536 [2024-07-24 17:22:05.575909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:98408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.536 [2024-07-24 17:22:05.575926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.536 [2024-07-24 17:22:05.575944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:98416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.536 [2024-07-24 17:22:05.575961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.536 [2024-07-24 17:22:05.575979] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:98424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.536 [2024-07-24 17:22:05.575996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.536 [2024-07-24 17:22:05.576044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:98432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.536 [2024-07-24 17:22:05.576060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.536 [2024-07-24 17:22:05.576076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:98440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.536 [2024-07-24 17:22:05.576091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.536 [2024-07-24 17:22:05.576108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:98448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.536 [2024-07-24 17:22:05.576146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.536 [2024-07-24 17:22:05.576163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:98456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.536 [2024-07-24 17:22:05.576203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.536 [2024-07-24 17:22:05.576224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:98464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.536 [2024-07-24 17:22:05.576240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.536 [2024-07-24 17:22:05.576258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:98472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.536 [2024-07-24 17:22:05.576275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.536 [2024-07-24 17:22:05.576292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:98480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.536 [2024-07-24 17:22:05.576310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.536 [2024-07-24 17:22:05.576327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.536 [2024-07-24 17:22:05.576343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.536 [2024-07-24 17:22:05.576360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:98496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.536 [2024-07-24 17:22:05.576376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.536 [2024-07-24 17:22:05.576393] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:98504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.536 [2024-07-24 17:22:05.576409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.536 [2024-07-24 17:22:05.576426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:98512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.536 [2024-07-24 17:22:05.576442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.536 [2024-07-24 17:22:05.576459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:98520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.536 [2024-07-24 17:22:05.576489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.536 [2024-07-24 17:22:05.576511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:98528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.536 [2024-07-24 17:22:05.576526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.536 [2024-07-24 17:22:05.576543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:98536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.536 [2024-07-24 17:22:05.576558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.536 [2024-07-24 17:22:05.576613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:98544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.536 [2024-07-24 17:22:05.576631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.536 [2024-07-24 17:22:05.576649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:98552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.536 [2024-07-24 17:22:05.576666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.536 [2024-07-24 17:22:05.576697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:98560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.536 [2024-07-24 17:22:05.576716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.536 [2024-07-24 17:22:05.576733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:98568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.536 [2024-07-24 17:22:05.576750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.536 [2024-07-24 17:22:05.576767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:98576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.536 [2024-07-24 17:22:05.576783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.536 [2024-07-24 17:22:05.576802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:98584 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.536 [2024-07-24 17:22:05.576818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.536 [2024-07-24 17:22:05.576836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:98592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.537 [2024-07-24 17:22:05.576852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.537 [2024-07-24 17:22:05.576871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:98600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.537 [2024-07-24 17:22:05.576888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.537 [2024-07-24 17:22:05.576905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:98608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.537 [2024-07-24 17:22:05.576922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.537 [2024-07-24 17:22:05.576940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:98616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.537 [2024-07-24 17:22:05.576956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.537 [2024-07-24 17:22:05.576989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:98624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.537 [2024-07-24 17:22:05.577021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.537 [2024-07-24 17:22:05.577037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:98632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.537 [2024-07-24 17:22:05.577052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.537 [2024-07-24 17:22:05.577069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:98640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.537 [2024-07-24 17:22:05.577084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.537 [2024-07-24 17:22:05.577101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:98648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.537 [2024-07-24 17:22:05.577134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.537 [2024-07-24 17:22:05.577151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:98656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.537 [2024-07-24 17:22:05.577168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.537 [2024-07-24 17:22:05.577210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:98664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.537 
[2024-07-24 17:22:05.577230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.537 [2024-07-24 17:22:05.577248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:98672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.537 [2024-07-24 17:22:05.577264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.537 [2024-07-24 17:22:05.577282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:98680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.537 [2024-07-24 17:22:05.577298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.537 [2024-07-24 17:22:05.577315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:98688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.537 [2024-07-24 17:22:05.577331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.537 [2024-07-24 17:22:05.577348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:98696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.537 [2024-07-24 17:22:05.577365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.537 [2024-07-24 17:22:05.577382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:98704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.537 [2024-07-24 17:22:05.577397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.537 [2024-07-24 17:22:05.577415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:98712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.537 [2024-07-24 17:22:05.577430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.537 [2024-07-24 17:22:05.577447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:98720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.537 [2024-07-24 17:22:05.577463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.537 [2024-07-24 17:22:05.577494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:98728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.537 [2024-07-24 17:22:05.577518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.537 [2024-07-24 17:22:05.577535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:98736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.537 [2024-07-24 17:22:05.577550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.537 [2024-07-24 17:22:05.577567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:98744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.537 [2024-07-24 17:22:05.577583] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.537 [2024-07-24 17:22:05.577599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:98752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.537 [2024-07-24 17:22:05.577614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.537 [2024-07-24 17:22:05.577631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:98760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.537 [2024-07-24 17:22:05.577656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.537 [2024-07-24 17:22:05.577674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:98768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.537 [2024-07-24 17:22:05.577690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.537 [2024-07-24 17:22:05.577707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:98776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.537 [2024-07-24 17:22:05.577722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.537 [2024-07-24 17:22:05.577738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:98784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.537 [2024-07-24 17:22:05.577754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.537 [2024-07-24 17:22:05.577771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:98792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.537 [2024-07-24 17:22:05.577787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.537 [2024-07-24 17:22:05.577804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:98800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.537 [2024-07-24 17:22:05.577819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.537 [2024-07-24 17:22:05.577836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:98808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.537 [2024-07-24 17:22:05.577851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.537 [2024-07-24 17:22:05.577868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:98816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.537 [2024-07-24 17:22:05.577883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.537 [2024-07-24 17:22:05.577899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:98824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.537 [2024-07-24 17:22:05.577915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.537 [2024-07-24 17:22:05.577932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:98832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.537 [2024-07-24 17:22:05.577948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.537 [2024-07-24 17:22:05.577966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:98840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.537 [2024-07-24 17:22:05.577982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.537 [2024-07-24 17:22:05.577998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:98848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.537 [2024-07-24 17:22:05.578014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.537 [2024-07-24 17:22:05.578030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:98856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.537 [2024-07-24 17:22:05.578045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.537 [2024-07-24 17:22:05.578069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:98864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.537 [2024-07-24 17:22:05.578085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.537 [2024-07-24 17:22:05.578102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:98872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.538 [2024-07-24 17:22:05.578146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.538 [2024-07-24 17:22:05.578165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:98880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.538 [2024-07-24 17:22:05.578181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.538 [2024-07-24 17:22:05.578198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:98888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.538 [2024-07-24 17:22:05.578215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.538 [2024-07-24 17:22:05.578232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:98896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.538 [2024-07-24 17:22:05.578248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.538 [2024-07-24 17:22:05.578264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:98904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.538 [2024-07-24 17:22:05.578280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:21:30.538 [2024-07-24 17:22:05.578298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:98912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.538 [2024-07-24 17:22:05.578313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.538 [2024-07-24 17:22:05.578330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:98920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.538 [2024-07-24 17:22:05.578347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.538 [2024-07-24 17:22:05.578364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:98928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.538 [2024-07-24 17:22:05.578380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.538 [2024-07-24 17:22:05.578397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:98936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.538 [2024-07-24 17:22:05.578413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.538 [2024-07-24 17:22:05.578430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:98944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.538 [2024-07-24 17:22:05.578446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.538 [2024-07-24 17:22:05.578462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:98952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.538 [2024-07-24 17:22:05.578493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.538 [2024-07-24 17:22:05.578510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:98960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.538 [2024-07-24 17:22:05.578551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.538 [2024-07-24 17:22:05.578570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:98968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.538 [2024-07-24 17:22:05.578586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.538 [2024-07-24 17:22:05.578603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:98976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.538 [2024-07-24 17:22:05.578619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.538 [2024-07-24 17:22:05.578635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:98984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.538 [2024-07-24 17:22:05.578650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.538 [2024-07-24 17:22:05.578667] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:98992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.538 [2024-07-24 17:22:05.578682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.538 [2024-07-24 17:22:05.578699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:99000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.538 [2024-07-24 17:22:05.578714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.538 [2024-07-24 17:22:05.578748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:99008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.538 [2024-07-24 17:22:05.578764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.538 [2024-07-24 17:22:05.578781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:99016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.538 [2024-07-24 17:22:05.578797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.538 [2024-07-24 17:22:05.578814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:99024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.538 [2024-07-24 17:22:05.578829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.538 [2024-07-24 17:22:05.578847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:99032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.538 [2024-07-24 17:22:05.578862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.538 [2024-07-24 17:22:05.578880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:99040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.538 [2024-07-24 17:22:05.578896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.538 [2024-07-24 17:22:05.578958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:99048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.538 [2024-07-24 17:22:05.578979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.538 [2024-07-24 17:22:05.578997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:99056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.538 [2024-07-24 17:22:05.579014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.538 [2024-07-24 17:22:05.579074] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:30.538 [2024-07-24 17:22:05.579123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99064 len:8 PRP1 0x0 PRP2 0x0 00:21:30.538 [2024-07-24 17:22:05.579143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:21:30.538 [2024-07-24 17:22:05.579164] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:30.538 [2024-07-24 17:22:05.579179] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:30.538 [2024-07-24 17:22:05.579192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99072 len:8 PRP1 0x0 PRP2 0x0 00:21:30.538 [2024-07-24 17:22:05.579208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.538 [2024-07-24 17:22:05.579231] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:30.538 [2024-07-24 17:22:05.579261] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:30.538 [2024-07-24 17:22:05.579289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99080 len:8 PRP1 0x0 PRP2 0x0 00:21:30.538 [2024-07-24 17:22:05.579304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.538 [2024-07-24 17:22:05.579320] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:30.538 [2024-07-24 17:22:05.579331] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:30.538 [2024-07-24 17:22:05.579343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99088 len:8 PRP1 0x0 PRP2 0x0 00:21:30.538 [2024-07-24 17:22:05.579359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.538 [2024-07-24 17:22:05.579374] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:30.538 [2024-07-24 17:22:05.579386] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:30.538 [2024-07-24 17:22:05.579398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99096 len:8 PRP1 0x0 PRP2 0x0 00:21:30.538 [2024-07-24 17:22:05.579413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.538 [2024-07-24 17:22:05.579443] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:30.538 [2024-07-24 17:22:05.579455] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:30.538 [2024-07-24 17:22:05.579467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99104 len:8 PRP1 0x0 PRP2 0x0 00:21:30.538 [2024-07-24 17:22:05.579481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.538 [2024-07-24 17:22:05.579496] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:30.538 [2024-07-24 17:22:05.579508] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:30.538 [2024-07-24 17:22:05.579519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99112 len:8 PRP1 0x0 PRP2 0x0 00:21:30.538 [2024-07-24 17:22:05.579533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.538 [2024-07-24 
17:22:05.579547] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:30.538 [2024-07-24 17:22:05.579559] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:30.538 [2024-07-24 17:22:05.579591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99120 len:8 PRP1 0x0 PRP2 0x0 00:21:30.538 [2024-07-24 17:22:05.579607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.539 [2024-07-24 17:22:05.579623] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:30.539 [2024-07-24 17:22:05.579639] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:30.539 [2024-07-24 17:22:05.579655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99128 len:8 PRP1 0x0 PRP2 0x0 00:21:30.539 [2024-07-24 17:22:05.579671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.539 [2024-07-24 17:22:05.579686] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:30.539 [2024-07-24 17:22:05.579699] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:30.539 [2024-07-24 17:22:05.579711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99136 len:8 PRP1 0x0 PRP2 0x0 00:21:30.539 [2024-07-24 17:22:05.579726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.539 [2024-07-24 17:22:05.579749] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:30.539 [2024-07-24 17:22:05.579762] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:30.539 [2024-07-24 17:22:05.579773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99144 len:8 PRP1 0x0 PRP2 0x0 00:21:30.539 [2024-07-24 17:22:05.579788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.539 [2024-07-24 17:22:05.579804] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:30.539 [2024-07-24 17:22:05.579815] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:30.539 [2024-07-24 17:22:05.579827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99152 len:8 PRP1 0x0 PRP2 0x0 00:21:30.539 [2024-07-24 17:22:05.579843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.539 [2024-07-24 17:22:05.579859] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:30.539 [2024-07-24 17:22:05.579870] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:30.539 [2024-07-24 17:22:05.579882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99160 len:8 PRP1 0x0 PRP2 0x0 00:21:30.539 [2024-07-24 17:22:05.579897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.539 [2024-07-24 17:22:05.579912] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:30.539 [2024-07-24 17:22:05.579923] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:30.539 [2024-07-24 17:22:05.579936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99168 len:8 PRP1 0x0 PRP2 0x0 00:21:30.539 [2024-07-24 17:22:05.579950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.539 [2024-07-24 17:22:05.579981] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:30.539 [2024-07-24 17:22:05.579993] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:30.539 [2024-07-24 17:22:05.580005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99176 len:8 PRP1 0x0 PRP2 0x0 00:21:30.539 [2024-07-24 17:22:05.580019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.539 [2024-07-24 17:22:05.580033] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:30.539 [2024-07-24 17:22:05.580045] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:30.539 [2024-07-24 17:22:05.580057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99184 len:8 PRP1 0x0 PRP2 0x0 00:21:30.539 [2024-07-24 17:22:05.580071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.539 [2024-07-24 17:22:05.580093] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:30.539 [2024-07-24 17:22:05.580106] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:30.539 [2024-07-24 17:22:05.580134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99192 len:8 PRP1 0x0 PRP2 0x0 00:21:30.539 [2024-07-24 17:22:05.580151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.539 [2024-07-24 17:22:05.580166] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:30.539 [2024-07-24 17:22:05.580178] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:30.539 [2024-07-24 17:22:05.580213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99200 len:8 PRP1 0x0 PRP2 0x0 00:21:30.539 [2024-07-24 17:22:05.580231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.539 [2024-07-24 17:22:05.580253] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:30.539 [2024-07-24 17:22:05.580267] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:30.539 [2024-07-24 17:22:05.580279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99208 len:8 PRP1 0x0 PRP2 0x0 00:21:30.539 [2024-07-24 17:22:05.580293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.539 [2024-07-24 17:22:05.580308] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:21:30.539 [2024-07-24 17:22:05.580320] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:30.539 [2024-07-24 17:22:05.580332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99216 len:8 PRP1 0x0 PRP2 0x0 00:21:30.539 [2024-07-24 17:22:05.580347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.539 [2024-07-24 17:22:05.580362] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:30.539 [2024-07-24 17:22:05.580374] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:30.539 [2024-07-24 17:22:05.580387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99224 len:8 PRP1 0x0 PRP2 0x0 00:21:30.539 [2024-07-24 17:22:05.580402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.539 [2024-07-24 17:22:05.580418] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:30.539 [2024-07-24 17:22:05.580430] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:30.539 [2024-07-24 17:22:05.580442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99232 len:8 PRP1 0x0 PRP2 0x0 00:21:30.539 [2024-07-24 17:22:05.580456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.539 [2024-07-24 17:22:05.580471] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:30.539 [2024-07-24 17:22:05.580499] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:30.539 [2024-07-24 17:22:05.580510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99240 len:8 PRP1 0x0 PRP2 0x0 00:21:30.539 [2024-07-24 17:22:05.580525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.539 [2024-07-24 17:22:05.580541] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:30.539 [2024-07-24 17:22:05.580553] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:30.539 [2024-07-24 17:22:05.580582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99248 len:8 PRP1 0x0 PRP2 0x0 00:21:30.539 [2024-07-24 17:22:05.580611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.539 [2024-07-24 17:22:05.580626] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:30.539 [2024-07-24 17:22:05.580638] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:30.539 [2024-07-24 17:22:05.580651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99256 len:8 PRP1 0x0 PRP2 0x0 00:21:30.539 [2024-07-24 17:22:05.580665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.539 [2024-07-24 17:22:05.580681] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:30.539 [2024-07-24 17:22:05.580692] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:30.539 [2024-07-24 17:22:05.580710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99264 len:8 PRP1 0x0 PRP2 0x0 00:21:30.539 [2024-07-24 17:22:05.580725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.539 [2024-07-24 17:22:05.580742] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:30.539 [2024-07-24 17:22:05.580754] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:30.539 [2024-07-24 17:22:05.580766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99272 len:8 PRP1 0x0 PRP2 0x0 00:21:30.539 [2024-07-24 17:22:05.580781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.539 [2024-07-24 17:22:05.580796] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:30.539 [2024-07-24 17:22:05.580808] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:30.539 [2024-07-24 17:22:05.580820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99280 len:8 PRP1 0x0 PRP2 0x0 00:21:30.539 [2024-07-24 17:22:05.580835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.539 [2024-07-24 17:22:05.580850] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:30.539 [2024-07-24 17:22:05.580862] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:30.539 [2024-07-24 17:22:05.580874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99288 len:8 PRP1 0x0 PRP2 0x0 00:21:30.539 [2024-07-24 17:22:05.580889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.539 [2024-07-24 17:22:05.580904] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:30.539 [2024-07-24 17:22:05.580917] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:30.539 [2024-07-24 17:22:05.580928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99296 len:8 PRP1 0x0 PRP2 0x0 00:21:30.539 [2024-07-24 17:22:05.580943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.539 [2024-07-24 17:22:05.580958] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:30.539 [2024-07-24 17:22:05.580970] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:30.540 [2024-07-24 17:22:05.580997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99304 len:8 PRP1 0x0 PRP2 0x0 00:21:30.540 [2024-07-24 17:22:05.581012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.540 [2024-07-24 17:22:05.581026] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:30.540 [2024-07-24 17:22:05.581045] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:21:30.540 [2024-07-24 17:22:05.581057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99312 len:8 PRP1 0x0 PRP2 0x0 00:21:30.540 [2024-07-24 17:22:05.581071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.540 [2024-07-24 17:22:05.581086] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:30.540 [2024-07-24 17:22:05.581098] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:30.540 [2024-07-24 17:22:05.581109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99320 len:8 PRP1 0x0 PRP2 0x0 00:21:30.540 [2024-07-24 17:22:05.581123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.540 [2024-07-24 17:22:05.581138] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:30.540 [2024-07-24 17:22:05.581164] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:30.540 [2024-07-24 17:22:05.581183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99328 len:8 PRP1 0x0 PRP2 0x0 00:21:30.540 [2024-07-24 17:22:05.581198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.540 [2024-07-24 17:22:05.581214] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:30.540 [2024-07-24 17:22:05.581226] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:30.540 [2024-07-24 17:22:05.581238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99336 len:8 PRP1 0x0 PRP2 0x0 00:21:30.540 [2024-07-24 17:22:05.581253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.540 [2024-07-24 17:22:05.581268] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:30.540 [2024-07-24 17:22:05.581279] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:30.540 [2024-07-24 17:22:05.581291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99344 len:8 PRP1 0x0 PRP2 0x0 00:21:30.540 [2024-07-24 17:22:05.581305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.540 [2024-07-24 17:22:05.581320] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:30.540 [2024-07-24 17:22:05.581331] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:30.540 [2024-07-24 17:22:05.581343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99352 len:8 PRP1 0x0 PRP2 0x0 00:21:30.540 [2024-07-24 17:22:05.581357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.540 [2024-07-24 17:22:05.581372] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:30.540 [2024-07-24 17:22:05.581385] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:30.540 
[2024-07-24 17:22:05.581396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99360 len:8 PRP1 0x0 PRP2 0x0 00:21:30.540 [2024-07-24 17:22:05.581411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.540 [2024-07-24 17:22:05.581484] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xac9d90 was disconnected and freed. reset controller. 00:21:30.540 [2024-07-24 17:22:05.581506] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:21:30.540 [2024-07-24 17:22:05.581575] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:30.540 [2024-07-24 17:22:05.581599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.540 [2024-07-24 17:22:05.581629] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:30.540 [2024-07-24 17:22:05.581645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.540 [2024-07-24 17:22:05.581661] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:30.540 [2024-07-24 17:22:05.581676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.540 [2024-07-24 17:22:05.581691] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:30.540 [2024-07-24 17:22:05.581706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.540 [2024-07-24 17:22:05.581721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:30.540 [2024-07-24 17:22:05.581787] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa53e30 (9): Bad file descriptor 00:21:30.540 [2024-07-24 17:22:05.585185] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:30.540 [2024-07-24 17:22:05.630023] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:21:30.540 [2024-07-24 17:22:10.125770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:49824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.540 [2024-07-24 17:22:10.125845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.540 [2024-07-24 17:22:10.125877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:49832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.540 [2024-07-24 17:22:10.125896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.540 [2024-07-24 17:22:10.125916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:49840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.540 [2024-07-24 17:22:10.125933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.540 [2024-07-24 17:22:10.125951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:49848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.540 [2024-07-24 17:22:10.125984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.540 [2024-07-24 17:22:10.126001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:49856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.540 [2024-07-24 17:22:10.126018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.540 [2024-07-24 17:22:10.126036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:49864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.540 [2024-07-24 17:22:10.126052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.540 [2024-07-24 17:22:10.126070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:49872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.540 [2024-07-24 17:22:10.126085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.540 [2024-07-24 17:22:10.126102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:49880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.540 [2024-07-24 17:22:10.126136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.540 [2024-07-24 17:22:10.126206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:49888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.540 [2024-07-24 17:22:10.126225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.540 [2024-07-24 17:22:10.126243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:49896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.540 [2024-07-24 17:22:10.126260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.540 [2024-07-24 17:22:10.126277] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:49904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.540 [2024-07-24 17:22:10.126294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.540 [2024-07-24 17:22:10.126311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:49912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.540 [2024-07-24 17:22:10.126327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.540 [2024-07-24 17:22:10.126344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:49248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.540 [2024-07-24 17:22:10.126361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.540 [2024-07-24 17:22:10.126378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:49256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.540 [2024-07-24 17:22:10.126394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.540 [2024-07-24 17:22:10.126412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:49264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.540 [2024-07-24 17:22:10.126445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.540 [2024-07-24 17:22:10.126462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:49272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.540 [2024-07-24 17:22:10.126477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.540 [2024-07-24 17:22:10.126495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:49280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.540 [2024-07-24 17:22:10.126514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.540 [2024-07-24 17:22:10.126533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:49288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.540 [2024-07-24 17:22:10.126551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.540 [2024-07-24 17:22:10.126569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:49296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.541 [2024-07-24 17:22:10.126603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.541 [2024-07-24 17:22:10.126625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:49304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.541 [2024-07-24 17:22:10.126642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.541 [2024-07-24 17:22:10.126659] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:49312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.541 [2024-07-24 17:22:10.126687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.541 [2024-07-24 17:22:10.126706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:49320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.541 [2024-07-24 17:22:10.126721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.541 [2024-07-24 17:22:10.126740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:49328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.541 [2024-07-24 17:22:10.126756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.541 [2024-07-24 17:22:10.126774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:49336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.541 [2024-07-24 17:22:10.126790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.541 [2024-07-24 17:22:10.126807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:49344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.541 [2024-07-24 17:22:10.126824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.541 [2024-07-24 17:22:10.126842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:49352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.541 [2024-07-24 17:22:10.126858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.541 [2024-07-24 17:22:10.126875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:49360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.541 [2024-07-24 17:22:10.126891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.541 [2024-07-24 17:22:10.126908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:49368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.541 [2024-07-24 17:22:10.126956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.541 [2024-07-24 17:22:10.126977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:49376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.541 [2024-07-24 17:22:10.126994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.541 [2024-07-24 17:22:10.127013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:49384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.541 [2024-07-24 17:22:10.127030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.541 [2024-07-24 17:22:10.127048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:12 nsid:1 lba:49392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.541 [2024-07-24 17:22:10.127065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.541 [2024-07-24 17:22:10.127085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:49400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.541 [2024-07-24 17:22:10.127102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.541 [2024-07-24 17:22:10.127135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:49920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.541 [2024-07-24 17:22:10.127155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.541 [2024-07-24 17:22:10.127186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:49928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.541 [2024-07-24 17:22:10.127220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.541 [2024-07-24 17:22:10.127239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:49936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.541 [2024-07-24 17:22:10.127271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.541 [2024-07-24 17:22:10.127288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:49944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.541 [2024-07-24 17:22:10.127304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.541 [2024-07-24 17:22:10.127322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:49952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.541 [2024-07-24 17:22:10.127338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.541 [2024-07-24 17:22:10.127356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:49960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.541 [2024-07-24 17:22:10.127372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.541 [2024-07-24 17:22:10.127389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:49968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.541 [2024-07-24 17:22:10.127422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.541 [2024-07-24 17:22:10.127439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:49976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.541 [2024-07-24 17:22:10.127455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.541 [2024-07-24 17:22:10.127479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:49984 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:21:30.541 [2024-07-24 17:22:10.127495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.541 [2024-07-24 17:22:10.127512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:49992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.541 [2024-07-24 17:22:10.127536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.541 [2024-07-24 17:22:10.127553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:50000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.541 [2024-07-24 17:22:10.127568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.541 [2024-07-24 17:22:10.127585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:50008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.541 [2024-07-24 17:22:10.127600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.541 [2024-07-24 17:22:10.127617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:50016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.541 [2024-07-24 17:22:10.127633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.541 [2024-07-24 17:22:10.127650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:50024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.541 [2024-07-24 17:22:10.127674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.541 [2024-07-24 17:22:10.127693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:50032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.541 [2024-07-24 17:22:10.127709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.541 [2024-07-24 17:22:10.127746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:50040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.541 [2024-07-24 17:22:10.127763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.541 [2024-07-24 17:22:10.127781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:50048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.541 [2024-07-24 17:22:10.127797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.541 [2024-07-24 17:22:10.127816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:50056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.541 [2024-07-24 17:22:10.127832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.542 [2024-07-24 17:22:10.127851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:50064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.542 [2024-07-24 
17:22:10.127867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.542 [2024-07-24 17:22:10.127885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:50072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.542 [2024-07-24 17:22:10.127902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.542 [2024-07-24 17:22:10.127920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:50080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.542 [2024-07-24 17:22:10.127936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.542 [2024-07-24 17:22:10.127953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:50088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.542 [2024-07-24 17:22:10.127969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.542 [2024-07-24 17:22:10.127987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:50096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.542 [2024-07-24 17:22:10.128003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.542 [2024-07-24 17:22:10.128020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:50104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.542 [2024-07-24 17:22:10.128038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.542 [2024-07-24 17:22:10.128056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:50112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.542 [2024-07-24 17:22:10.128071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.542 [2024-07-24 17:22:10.128089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:50120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.542 [2024-07-24 17:22:10.128105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.542 [2024-07-24 17:22:10.128123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:50128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.542 [2024-07-24 17:22:10.128150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.542 [2024-07-24 17:22:10.128182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:50136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.542 [2024-07-24 17:22:10.128200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.542 [2024-07-24 17:22:10.128218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:50144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.542 [2024-07-24 17:22:10.128235] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.542 [2024-07-24 17:22:10.128252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:50152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.542 [2024-07-24 17:22:10.128268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.542 [2024-07-24 17:22:10.128285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:50160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.542 [2024-07-24 17:22:10.128301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.542 [2024-07-24 17:22:10.128319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:50168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.542 [2024-07-24 17:22:10.128336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.542 [2024-07-24 17:22:10.128353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:50176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.542 [2024-07-24 17:22:10.128368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.542 [2024-07-24 17:22:10.128388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:50184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.542 [2024-07-24 17:22:10.128404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.542 [2024-07-24 17:22:10.128421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:50192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.542 [2024-07-24 17:22:10.128438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.542 [2024-07-24 17:22:10.128456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:50200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.542 [2024-07-24 17:22:10.128472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.542 [2024-07-24 17:22:10.128489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:50208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.542 [2024-07-24 17:22:10.128505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.542 [2024-07-24 17:22:10.128523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:49408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.542 [2024-07-24 17:22:10.128539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.542 [2024-07-24 17:22:10.128556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:49416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.542 [2024-07-24 17:22:10.128573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.542 [2024-07-24 17:22:10.128604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:49424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.542 [2024-07-24 17:22:10.128622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.542 [2024-07-24 17:22:10.128639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:49432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.542 [2024-07-24 17:22:10.128657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.542 [2024-07-24 17:22:10.128674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:49440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.542 [2024-07-24 17:22:10.128691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.542 [2024-07-24 17:22:10.128709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:49448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.542 [2024-07-24 17:22:10.128727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.542 [2024-07-24 17:22:10.128745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:49456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.542 [2024-07-24 17:22:10.128761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.542 [2024-07-24 17:22:10.128779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:49464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.542 [2024-07-24 17:22:10.128795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.542 [2024-07-24 17:22:10.128812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:49472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.542 [2024-07-24 17:22:10.128829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.542 [2024-07-24 17:22:10.128847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:49480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.542 [2024-07-24 17:22:10.128863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.542 [2024-07-24 17:22:10.128880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:49488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.542 [2024-07-24 17:22:10.128896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.542 [2024-07-24 17:22:10.128914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:49496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.542 [2024-07-24 17:22:10.128931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:21:30.542 [2024-07-24 17:22:10.128948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:49504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.542 [2024-07-24 17:22:10.128965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.542 [2024-07-24 17:22:10.128983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:49512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.542 [2024-07-24 17:22:10.129000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.542 [2024-07-24 17:22:10.129017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:49520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.542 [2024-07-24 17:22:10.129041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.542 [2024-07-24 17:22:10.129060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:49528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.542 [2024-07-24 17:22:10.129077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.542 [2024-07-24 17:22:10.129094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:49536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.542 [2024-07-24 17:22:10.129133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.542 [2024-07-24 17:22:10.129153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:49544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.542 [2024-07-24 17:22:10.129170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.542 [2024-07-24 17:22:10.129187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:49552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.542 [2024-07-24 17:22:10.129203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.542 [2024-07-24 17:22:10.129220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:49560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.542 [2024-07-24 17:22:10.129237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.543 [2024-07-24 17:22:10.129254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:49568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.543 [2024-07-24 17:22:10.129270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.543 [2024-07-24 17:22:10.129288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:49576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.543 [2024-07-24 17:22:10.129303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.543 
[2024-07-24 17:22:10.129321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:49584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.543 [2024-07-24 17:22:10.129337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.543 [2024-07-24 17:22:10.129354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:49592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.543 [2024-07-24 17:22:10.129370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.543 [2024-07-24 17:22:10.129388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:49600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.543 [2024-07-24 17:22:10.129404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.543 [2024-07-24 17:22:10.129422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:49608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.543 [2024-07-24 17:22:10.129438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.543 [2024-07-24 17:22:10.129455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:49616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.543 [2024-07-24 17:22:10.129471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.543 [2024-07-24 17:22:10.129499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:49624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.543 [2024-07-24 17:22:10.129517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.543 [2024-07-24 17:22:10.129535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:49632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.543 [2024-07-24 17:22:10.129551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.543 [2024-07-24 17:22:10.129602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:49640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.543 [2024-07-24 17:22:10.129623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.543 [2024-07-24 17:22:10.129641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:49648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.543 [2024-07-24 17:22:10.129658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.543 [2024-07-24 17:22:10.129677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:49656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.543 [2024-07-24 17:22:10.129693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.543 [2024-07-24 17:22:10.129712] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:49664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.543 [2024-07-24 17:22:10.129729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.543 [2024-07-24 17:22:10.129748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:49672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.543 [2024-07-24 17:22:10.129765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.543 [2024-07-24 17:22:10.129784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:49680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.543 [2024-07-24 17:22:10.129801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.543 [2024-07-24 17:22:10.129819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:49688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.543 [2024-07-24 17:22:10.129836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.543 [2024-07-24 17:22:10.129855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:49696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.543 [2024-07-24 17:22:10.129871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.543 [2024-07-24 17:22:10.129890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:49704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.543 [2024-07-24 17:22:10.129907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.543 [2024-07-24 17:22:10.129925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:49712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.543 [2024-07-24 17:22:10.129942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.543 [2024-07-24 17:22:10.129976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:49720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.543 [2024-07-24 17:22:10.130000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.543 [2024-07-24 17:22:10.130035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:49728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.543 [2024-07-24 17:22:10.130052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.543 [2024-07-24 17:22:10.130070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:49736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.543 [2024-07-24 17:22:10.130086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.543 [2024-07-24 17:22:10.130103] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:1 nsid:1 lba:49744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.543 [2024-07-24 17:22:10.130121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.543 [2024-07-24 17:22:10.130138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:49752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.543 [2024-07-24 17:22:10.130165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.543 [2024-07-24 17:22:10.130186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:49760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.543 [2024-07-24 17:22:10.130202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.543 [2024-07-24 17:22:10.130220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:49768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.543 [2024-07-24 17:22:10.130236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.543 [2024-07-24 17:22:10.130254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:49776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.543 [2024-07-24 17:22:10.130280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.543 [2024-07-24 17:22:10.130299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:49784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.543 [2024-07-24 17:22:10.130315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.543 [2024-07-24 17:22:10.130333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:49792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.543 [2024-07-24 17:22:10.130349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.543 [2024-07-24 17:22:10.130367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:49800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.543 [2024-07-24 17:22:10.130383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.543 [2024-07-24 17:22:10.130400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:49808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.543 [2024-07-24 17:22:10.130416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.543 [2024-07-24 17:22:10.130434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:49816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:30.543 [2024-07-24 17:22:10.130450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.543 [2024-07-24 17:22:10.130468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 
lba:50216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.543 [2024-07-24 17:22:10.130497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.543 [2024-07-24 17:22:10.130516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:50224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.543 [2024-07-24 17:22:10.130535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.543 [2024-07-24 17:22:10.130552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:50232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.543 [2024-07-24 17:22:10.130569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.543 [2024-07-24 17:22:10.130605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:50240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.543 [2024-07-24 17:22:10.130622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.543 [2024-07-24 17:22:10.130649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:50248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.543 [2024-07-24 17:22:10.130667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.544 [2024-07-24 17:22:10.130686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:50256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.544 [2024-07-24 17:22:10.130702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.544 [2024-07-24 17:22:10.130746] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:30.544 [2024-07-24 17:22:10.130763] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:30.544 [2024-07-24 17:22:10.130777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50264 len:8 PRP1 0x0 PRP2 0x0 00:21:30.544 [2024-07-24 17:22:10.130792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.544 [2024-07-24 17:22:10.130869] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xae0d80 was disconnected and freed. reset controller. 
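Editor's note: the wall of "ABORTED - SQ DELETION (00/08)" records above is the expected fallout of the failover test pulling a path out from under bdevperf. Every READ and WRITE still queued on the deleted submission queue is completed manually with that status, after which the qpair is disconnected and freed and the controller reset begins. Rather than reading each record, the pattern is easier to confirm in bulk from the capture file; a minimal sketch (the grep patterns here are illustrative, and the file is assumed to be the try.txt that host/failover.sh itself writes):

# Tally aborted READ vs WRITE submissions on I/O queue 1.
grep -Eo '(READ|WRITE) sqid:1' try.txt | sort | uniq -c

# Count torn-down qpairs and completed resets for the run.
grep -c 'was disconnected and freed' try.txt
grep -c 'Resetting controller successful' try.txt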
00:21:30.544 [2024-07-24 17:22:10.130891] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:21:30.544 [2024-07-24 17:22:10.131005] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:30.544 [2024-07-24 17:22:10.131030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.544 [2024-07-24 17:22:10.131048] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:30.544 [2024-07-24 17:22:10.131072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.544 [2024-07-24 17:22:10.131090] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:30.544 [2024-07-24 17:22:10.131107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.544 [2024-07-24 17:22:10.131143] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:30.544 [2024-07-24 17:22:10.131161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:30.544 [2024-07-24 17:22:10.131177] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:30.544 [2024-07-24 17:22:10.134667] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:30.544 [2024-07-24 17:22:10.134714] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa53e30 (9): Bad file descriptor 00:21:30.544 [2024-07-24 17:22:10.171262] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:30.544 00:21:30.544 Latency(us) 00:21:30.544 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:30.544 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:30.544 Verification LBA range: start 0x0 length 0x4000 00:21:30.544 NVMe0n1 : 15.00 9318.64 36.40 224.27 0.00 13382.19 670.25 18111.77 00:21:30.544 =================================================================================================================== 00:21:30.544 Total : 9318.64 36.40 224.27 0.00 13382.19 670.25 18111.77 00:21:30.544 Received shutdown signal, test time was about 15.000000 seconds 00:21:30.544 00:21:30.544 Latency(us) 00:21:30.544 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:30.544 =================================================================================================================== 00:21:30.544 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:30.544 17:22:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:21:30.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
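Editor's note: the 15-second verify summary just above is internally consistent. At the 4096-byte I/O size, throughput in MiB/s is IOPS x 4096 / 2^20, so 9318.64 IOPS works out to roughly 36.40 MiB/s as reported, and the nonzero Fail/s column corresponds to I/O failed back to bdevperf around the forced path teardowns. The pass criterion itself is simply that the bdevperf log contains one "Resetting controller successful" line per failover, which is what the grep -c above and the count check that follows verify. A one-liner to re-derive the throughput column (illustrative only, not part of the test script):

awk 'BEGIN { iops = 9318.64; io_size = 4096; printf "%.2f MiB/s\n", iops * io_size / (1024 * 1024) }'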
00:21:30.544 17:22:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:21:30.544 17:22:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:21:30.544 17:22:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=87605 00:21:30.544 17:22:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 87605 /var/tmp/bdevperf.sock 00:21:30.544 17:22:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 87605 ']' 00:21:30.544 17:22:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:21:30.544 17:22:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:30.544 17:22:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:30.544 17:22:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:30.544 17:22:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:30.544 17:22:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:31.111 17:22:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:31.111 17:22:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:21:31.111 17:22:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:31.369 [2024-07-24 17:22:17.493853] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:31.369 17:22:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:31.628 [2024-07-24 17:22:17.714018] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:21:31.628 17:22:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:31.886 NVMe0n1 00:21:31.886 17:22:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:32.145 00:21:32.145 17:22:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:32.403 00:21:32.403 17:22:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:21:32.403 17:22:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:32.971 17:22:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 
10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:32.971 17:22:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:21:36.253 17:22:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:36.253 17:22:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:21:36.253 17:22:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:36.253 17:22:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=87743 00:21:36.253 17:22:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 87743 00:21:37.630 0 00:21:37.630 17:22:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:37.630 [2024-07-24 17:22:16.270105] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:21:37.630 [2024-07-24 17:22:16.270876] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87605 ] 00:21:37.630 [2024-07-24 17:22:16.408605] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:37.630 [2024-07-24 17:22:16.562011] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:37.630 [2024-07-24 17:22:19.159820] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:21:37.630 [2024-07-24 17:22:19.160055] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.630 [2024-07-24 17:22:19.160088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.630 [2024-07-24 17:22:19.160165] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.630 [2024-07-24 17:22:19.160185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.630 [2024-07-24 17:22:19.160205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.630 [2024-07-24 17:22:19.160222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.630 [2024-07-24 17:22:19.160240] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.630 [2024-07-24 17:22:19.160256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.630 [2024-07-24 17:22:19.160273] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
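Editor's note: the xtrace above sets up the second bdevperf pass of host/failover.sh. A fresh bdevperf is started in RPC-wait mode (-z) on /var/tmp/bdevperf.sock, listeners for ports 4421 and 4422 are added to cnode1, the controller is attached through all three ports, and the 4420 path is then detached so the bdev layer has to fail over to a surviving path while perform_tests drives a one-second verify run; the admin-queue aborts and the "in failed state" notice above are the visible side of that forced reconnect. Stripped of the checks and xtrace noise, the sequence reduces to something like the following condensed sketch, using only commands, paths, and addresses visible in the trace:

#!/usr/bin/env bash
# Condensed from the host/failover.sh trace above; not the full test script.
SPDK=/home/vagrant/spdk_repo/spdk
RPC="$SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock"

# Expose the subsystem on two additional ports so the host has alternate paths.
$SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
$SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422

# Attach the same controller to bdevperf through each path.
for port in 4420 4421 4422; do
    $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s "$port" \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1
done

# Drop the primary path, give the bdev layer a moment, then drive I/O over the survivors.
$RPC bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
sleep 3
$SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

In the full script each step is wrapped in waitforlisten, bdev_nvme_get_controllers, and grep checks (all visible in the trace); they are omitted here only to keep the control flow readable.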
00:21:37.630 [2024-07-24 17:22:19.160332] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:37.630 [2024-07-24 17:22:19.160375] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9abe30 (9): Bad file descriptor 00:21:37.630 [2024-07-24 17:22:19.164493] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:37.630 Running I/O for 1 seconds... 00:21:37.630 00:21:37.630 Latency(us) 00:21:37.630 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:37.630 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:37.630 Verification LBA range: start 0x0 length 0x4000 00:21:37.630 NVMe0n1 : 1.01 9479.42 37.03 0.00 0.00 13418.72 2025.66 15490.33 00:21:37.630 =================================================================================================================== 00:21:37.630 Total : 9479.42 37.03 0.00 0.00 13418.72 2025.66 15490.33 00:21:37.630 17:22:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:37.630 17:22:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:21:37.630 17:22:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:37.888 17:22:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:21:37.888 17:22:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:38.147 17:22:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:38.405 17:22:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:21:41.688 17:22:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:41.688 17:22:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:21:41.688 17:22:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 87605 00:21:41.688 17:22:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 87605 ']' 00:21:41.688 17:22:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 87605 00:21:41.688 17:22:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:21:41.688 17:22:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:41.688 17:22:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87605 00:21:41.688 killing process with pid 87605 00:21:41.688 17:22:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:41.688 17:22:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:41.688 17:22:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87605' 00:21:41.688 17:22:27 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 87605 00:21:41.688 17:22:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 87605 00:21:41.947 17:22:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:21:42.220 17:22:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:42.220 17:22:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:21:42.220 17:22:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:42.220 17:22:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:21:42.220 17:22:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:42.220 17:22:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:21:42.506 17:22:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:42.506 17:22:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:21:42.506 17:22:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:42.506 17:22:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:42.506 rmmod nvme_tcp 00:21:42.506 rmmod nvme_fabrics 00:21:42.506 rmmod nvme_keyring 00:21:42.506 17:22:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:42.506 17:22:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:21:42.506 17:22:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:21:42.507 17:22:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 87243 ']' 00:21:42.507 17:22:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 87243 00:21:42.507 17:22:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 87243 ']' 00:21:42.507 17:22:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 87243 00:21:42.507 17:22:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:21:42.507 17:22:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:42.507 17:22:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87243 00:21:42.507 killing process with pid 87243 00:21:42.507 17:22:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:42.507 17:22:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:42.507 17:22:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87243' 00:21:42.507 17:22:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 87243 00:21:42.507 17:22:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 87243 00:21:42.765 17:22:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:42.765 17:22:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:42.765 17:22:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:42.765 17:22:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s ]] 00:21:42.765 17:22:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:42.765 17:22:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:42.765 17:22:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:42.765 17:22:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:42.765 17:22:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:42.765 00:21:42.765 real 0m32.959s 00:21:42.765 user 2m7.633s 00:21:42.765 sys 0m5.205s 00:21:42.765 17:22:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:42.765 17:22:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:42.765 ************************************ 00:21:42.765 END TEST nvmf_failover 00:21:42.765 ************************************ 00:21:42.765 17:22:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:21:42.765 17:22:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:42.765 17:22:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:42.765 17:22:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:42.765 ************************************ 00:21:42.765 START TEST nvmf_host_discovery 00:21:42.765 ************************************ 00:21:42.765 17:22:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:21:42.765 * Looking for test storage... 
00:21:42.765 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:42.765 17:22:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:42.765 17:22:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:21:42.765 17:22:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:42.765 17:22:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:42.765 17:22:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:42.765 17:22:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:42.765 17:22:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:42.765 17:22:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:42.765 17:22:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:42.765 17:22:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:42.765 17:22:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:42.766 17:22:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:42.766 17:22:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:21:42.766 17:22:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:21:42.766 17:22:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:42.766 17:22:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:42.766 17:22:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:42.766 17:22:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:42.766 17:22:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:42.766 17:22:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:42.766 17:22:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:42.766 17:22:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:42.766 17:22:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:42.766 17:22:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:42.766 17:22:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:42.766 17:22:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:21:42.766 17:22:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:42.766 17:22:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:21:42.766 17:22:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:42.766 17:22:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:42.766 17:22:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:42.766 17:22:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:42.766 17:22:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:42.766 17:22:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:42.766 17:22:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:42.766 17:22:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:42.766 17:22:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:21:42.766 17:22:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:21:42.766 17:22:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:21:42.766 17:22:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:21:42.766 17:22:28 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:21:42.766 17:22:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:21:42.766 17:22:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:21:42.766 17:22:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:42.766 17:22:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:42.766 17:22:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:42.766 17:22:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:42.766 17:22:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:42.766 17:22:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:42.766 17:22:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:42.766 17:22:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:42.766 17:22:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:21:42.766 17:22:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:21:42.766 17:22:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:21:42.766 17:22:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:21:42.766 17:22:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:21:42.766 17:22:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:21:42.766 17:22:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:42.766 17:22:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:42.766 17:22:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:42.766 17:22:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:42.766 17:22:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:42.766 17:22:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:42.766 17:22:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:42.766 17:22:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:42.766 17:22:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:42.766 17:22:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:42.766 17:22:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:42.766 17:22:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:42.766 17:22:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:42.766 17:22:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 
-- # ip link set nvmf_tgt_br nomaster 00:21:43.024 Cannot find device "nvmf_tgt_br" 00:21:43.024 17:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # true 00:21:43.024 17:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:43.024 Cannot find device "nvmf_tgt_br2" 00:21:43.024 17:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # true 00:21:43.024 17:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:43.024 17:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:43.024 Cannot find device "nvmf_tgt_br" 00:21:43.024 17:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # true 00:21:43.024 17:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:43.024 Cannot find device "nvmf_tgt_br2" 00:21:43.024 17:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # true 00:21:43.024 17:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:43.024 17:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:43.024 17:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:43.024 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:43.024 17:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:21:43.024 17:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:43.024 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:43.024 17:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:21:43.024 17:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:43.024 17:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:43.024 17:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:43.024 17:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:43.024 17:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:43.024 17:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:43.024 17:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:43.024 17:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:43.024 17:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:43.024 17:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:43.024 17:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:43.024 17:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@185 
-- # ip link set nvmf_tgt_br up 00:21:43.024 17:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:43.024 17:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:43.024 17:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:43.024 17:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:43.024 17:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:43.024 17:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:43.024 17:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:43.024 17:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:43.283 17:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:43.283 17:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:43.283 17:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:43.283 17:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:43.283 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:43.283 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:21:43.283 00:21:43.283 --- 10.0.0.2 ping statistics --- 00:21:43.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:43.283 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:21:43.283 17:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:43.283 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:43.283 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:21:43.283 00:21:43.283 --- 10.0.0.3 ping statistics --- 00:21:43.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:43.283 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:21:43.283 17:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:43.283 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:43.283 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.061 ms 00:21:43.283 00:21:43.283 --- 10.0.0.1 ping statistics --- 00:21:43.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:43.283 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:21:43.283 17:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:43.283 17:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@433 -- # return 0 00:21:43.283 17:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:43.283 17:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:43.283 17:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:43.283 17:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:43.283 17:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:43.283 17:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:43.283 17:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:43.283 17:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:21:43.283 17:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:43.283 17:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:43.283 17:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:43.283 17:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=88047 00:21:43.283 17:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:43.283 17:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 88047 00:21:43.283 17:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 88047 ']' 00:21:43.283 17:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:43.283 17:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:43.283 17:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:43.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:43.283 17:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:43.283 17:22:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:43.283 [2024-07-24 17:22:29.396396] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
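[Sketch, not part of the captured trace] The nvmf_veth_init sequence traced above boils down to the following shell steps; interface names, addresses, and the iptables rules are taken from the commands in the log, and the second target leg (nvmf_tgt_if2 / 10.0.0.3) follows the same pattern and is omitted here:
  # build an isolated initiator<->target topology: host-side veth peers bridged to a
  # veth end that lives inside the nvmf_tgt_ns_spdk network namespace
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator pair (stays in the host)
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # target end moves into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                      # tie both host-side peers together
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2                                           # initiator -> target reachability
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1            # target -> initiator reachability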
00:21:43.283 [2024-07-24 17:22:29.396555] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:43.541 [2024-07-24 17:22:29.538807] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:43.542 [2024-07-24 17:22:29.629126] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:43.542 [2024-07-24 17:22:29.629223] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:43.542 [2024-07-24 17:22:29.629234] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:43.542 [2024-07-24 17:22:29.629252] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:43.542 [2024-07-24 17:22:29.629258] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:43.542 [2024-07-24 17:22:29.629320] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:44.108 17:22:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:44.108 17:22:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:21:44.108 17:22:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:44.108 17:22:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:44.108 17:22:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:44.108 17:22:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:44.108 17:22:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:44.108 17:22:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.108 17:22:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:44.108 [2024-07-24 17:22:30.314326] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:44.108 17:22:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.108 17:22:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:21:44.108 17:22:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.108 17:22:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:44.108 [2024-07-24 17:22:30.322493] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:21:44.108 17:22:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.108 17:22:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:21:44.108 17:22:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.108 17:22:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:44.108 null0 00:21:44.108 17:22:30 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.108 17:22:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:21:44.108 17:22:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.108 17:22:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:44.108 null1 00:21:44.367 17:22:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.367 17:22:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:21:44.367 17:22:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.367 17:22:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:44.367 17:22:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.367 17:22:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=88097 00:21:44.367 17:22:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:21:44.367 17:22:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 88097 /tmp/host.sock 00:21:44.367 17:22:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 88097 ']' 00:21:44.367 17:22:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:21:44.367 17:22:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:44.367 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:21:44.367 17:22:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:21:44.367 17:22:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:44.367 17:22:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:44.367 [2024-07-24 17:22:30.402763] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
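[Sketch, not part of the captured trace] At this point the target application has been configured and the host-side application is being started; the rpc_cmd calls traced above are roughly equivalent to the following scripts/rpc.py invocations (paths relative to the spdk repo; the assumption here is that the target app answers on the default RPC socket /var/tmp/spdk.sock, which the log does not print):
  # target side: TCP transport, discovery listener on 8009, two null bdevs to export later
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
  scripts/rpc.py bdev_null_create null0 1000 512               # 1000 blocks of 512 B each
  scripts/rpc.py bdev_null_create null1 1000 512
  scripts/rpc.py bdev_wait_for_examine
  # host side: a second nvmf_tgt instance on core 0 with its own RPC socket
  build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &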
00:21:44.367 [2024-07-24 17:22:30.402849] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88097 ] 00:21:44.367 [2024-07-24 17:22:30.539633] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:44.625 [2024-07-24 17:22:30.698517] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:45.192 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:45.192 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:21:45.192 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:45.192 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:21:45.192 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.192 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:45.192 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.192 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:21:45.192 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.192 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:45.192 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.192 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:21:45.192 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:21:45.192 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:45.192 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.192 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:45.192 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:45.192 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:45.192 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:45.192 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.450 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:21:45.450 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:21:45.450 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:45.450 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:45.450 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:45.450 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.450 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:45.450 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:45.450 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.450 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:21:45.450 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:21:45.450 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.450 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:45.450 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.450 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:21:45.450 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:45.450 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:45.450 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.450 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:45.450 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:45.450 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:45.450 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.450 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:21:45.450 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:21:45.450 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:45.450 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:45.450 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:45.450 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.450 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:45.450 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:45.450 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.450 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:21:45.450 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:21:45.450 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.450 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:45.450 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.450 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:21:45.450 17:22:31 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:45.450 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.450 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:45.450 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:45.450 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:45.450 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:45.450 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.450 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:21:45.709 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:21:45.709 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:45.709 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:45.709 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:45.709 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.709 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:45.709 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:45.709 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.709 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:21:45.709 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:45.709 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.709 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:45.709 [2024-07-24 17:22:31.742835] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:45.709 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.709 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:21:45.709 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:45.709 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:45.709 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.709 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:45.709 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:45.709 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:45.709 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.709 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:21:45.709 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # 
get_bdev_list 00:21:45.709 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:45.709 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:45.709 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.709 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:45.709 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:45.709 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:45.709 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.709 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:21:45.709 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:21:45.709 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:21:45.709 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:45.709 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:45.709 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:45.709 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:45.709 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:45.709 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:21:45.709 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:21:45.709 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:45.709 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.709 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:45.709 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.709 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:21:45.709 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:21:45.709 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:21:45.709 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:45.709 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:21:45.709 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.709 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:45.709 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.709 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:45.709 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:45.709 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:45.709 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:45.709 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:45.709 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:21:45.709 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:45.709 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:45.709 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:45.709 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:45.709 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:45.709 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:45.709 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:45.967 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:21:45.967 17:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:21:46.225 [2024-07-24 17:22:32.397698] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:46.225 [2024-07-24 17:22:32.397761] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:46.225 [2024-07-24 17:22:32.397797] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:46.483 
[2024-07-24 17:22:32.483839] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:21:46.483 [2024-07-24 17:22:32.541468] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:46.483 [2024-07-24 17:22:32.541515] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:47.050 17:22:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:47.050 17:22:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:47.050 17:22:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:21:47.050 17:22:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:47.050 17:22:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.050 17:22:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:47.050 17:22:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:47.050 17:22:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:47.050 17:22:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:47.050 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.050 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:47.050 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:47.050 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:21:47.050 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:21:47.050 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:47.050 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:47.050 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:21:47.050 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:21:47.050 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:47.050 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.050 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:47.050 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:47.050 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:47.050 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:47.050 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.050 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 
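[Sketch, not part of the captured trace] The waitforcondition checks traced through this part of the test reduce to a few RPC + jq one-liners against the host application's socket; the filters are the ones used by the get_subsystem_names / get_bdev_list / get_subsystem_paths / get_notification_count helpers in the log, with the values expected at this stage of the test shown as comments:
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs       # -> nvme0
  scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs                  # -> nvme0n1
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
      | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs                                          # -> 4420
  scripts/rpc.py -s /tmp/host.sock notify_get_notifications -i 0 | jq '. | length'                   # bdev notifications since id 0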
00:21:47.050 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:47.050 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:21:47.050 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:21:47.050 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:47.050 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:47.050 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:21:47.050 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:21:47.050 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:47.050 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.050 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:47.050 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:47.050 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:47.050 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:47.050 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.050 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:21:47.050 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:47.050 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:21:47.050 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:21:47.050 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:47.050 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:47.050 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:47.050 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:47.050 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:47.050 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:21:47.050 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:21:47.050 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.050 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:47.050 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:47.050 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.050 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:21:47.050 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:21:47.050 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:21:47.050 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:47.050 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:21:47.050 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.050 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:47.050 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.050 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:47.050 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:47.050 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:47.050 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:47.051 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:21:47.051 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:21:47.051 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:47.051 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:47.051 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.051 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:47.051 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:47.051 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:47.051 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.309 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:47.309 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:47.309 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:21:47.309 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:21:47.309 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:47.309 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:47.309 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@915 -- # local max=10 00:21:47.309 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:47.309 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:47.309 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:21:47.309 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:21:47.309 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.309 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:47.309 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:21:47.309 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.309 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:21:47.309 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:21:47.309 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:21:47.309 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:47.309 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:21:47.309 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.309 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:47.309 [2024-07-24 17:22:33.347872] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:47.309 [2024-07-24 17:22:33.348714] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:21:47.309 [2024-07-24 17:22:33.348758] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:47.309 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.309 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:47.309 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:47.309 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:47.309 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:47.309 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:47.309 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:21:47.309 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:47.309 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:47.309 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 
-- # sort 00:21:47.309 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.309 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:47.309 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:47.309 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.309 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:47.309 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:47.309 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:47.309 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:47.309 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:47.309 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:47.309 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:21:47.309 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:21:47.309 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:47.309 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.309 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:47.310 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:47.310 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:47.310 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:47.310 [2024-07-24 17:22:33.434772] bdev_nvme.c:6935:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:21:47.310 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.310 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:47.310 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:47.310 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:21:47.310 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:21:47.310 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:47.310 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:47.310 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:21:47.310 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # 
get_subsystem_paths nvme0 00:21:47.310 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:47.310 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:47.310 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:47.310 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.310 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:47.310 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:47.310 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.310 [2024-07-24 17:22:33.501264] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:47.310 [2024-07-24 17:22:33.501295] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:47.310 [2024-07-24 17:22:33.501303] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:47.310 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:21:47.310 17:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:21:48.685 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:48.685 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:21:48.685 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:21:48.685 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:48.685 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:48.685 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.685 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:48.685 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:48.685 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:48.685 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.685 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:21:48.685 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:48.685 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:21:48.685 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:21:48.685 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:48.685 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 
'cond=get_notification_count && ((notification_count == expected_count))' 00:21:48.685 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:48.685 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:48.685 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:48.685 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:21:48.685 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:48.685 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:21:48.685 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.685 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:48.685 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.685 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:21:48.685 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:21:48.685 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:21:48.685 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:48.685 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:48.685 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.685 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:48.685 [2024-07-24 17:22:34.645412] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:21:48.685 [2024-07-24 17:22:34.645473] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:48.685 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.685 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:48.685 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:48.685 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:48.685 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:48.685 [2024-07-24 17:22:34.649512] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:48.685 [2024-07-24 17:22:34.649553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.685 [2024-07-24 17:22:34.649573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:48.685 [2024-07-24 17:22:34.649582] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.685 [2024-07-24 17:22:34.649593] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:48.685 [2024-07-24 17:22:34.649603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.685 [2024-07-24 17:22:34.649614] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:48.685 [2024-07-24 17:22:34.649624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:48.685 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:48.685 [2024-07-24 17:22:34.649633] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ccc50 is same with the state(5) to be set 00:21:48.685 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:21:48.685 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:48.685 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.685 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:48.685 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:48.685 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:48.685 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:48.685 [2024-07-24 17:22:34.659429] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ccc50 (9): Bad file descriptor 00:21:48.686 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.686 [2024-07-24 17:22:34.669489] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:48.686 [2024-07-24 17:22:34.669730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:48.686 [2024-07-24 17:22:34.669771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ccc50 with addr=10.0.0.2, port=4420 00:21:48.686 [2024-07-24 17:22:34.669789] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ccc50 is same with the state(5) to be set 00:21:48.686 [2024-07-24 17:22:34.669815] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ccc50 (9): Bad file descriptor 00:21:48.686 [2024-07-24 17:22:34.669864] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:48.686 [2024-07-24 17:22:34.669903] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:48.686 [2024-07-24 17:22:34.669917] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:48.686 [2024-07-24 17:22:34.669936] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
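[editor's note] The repeated "connect() failed, errno = 111" (ECONNREFUSED) and "Resetting controller failed" entries above are the host-side reconnect attempts against 10.0.0.2:4420 after host/discovery.sh@127 removed that listener; the test then polls until the bdev and path lists settle on the remaining 4421 path. The xtrace only shows these helpers being expanded, so the following is a sketch of discovery.sh's get_bdev_list and get_subsystem_paths reconstructed from the @55/@63 pipelines visible in this trace (the argument handling is assumed):

# Reconstructed from the '-- host/discovery.sh@55' and '@63' pipelines in this trace.
get_bdev_list() {
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

get_subsystem_paths() {
    local name=$1   # e.g. nvme0, as passed in the trace
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$name" \
        | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}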
00:21:48.686 [2024-07-24 17:22:34.679587] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:48.686 [2024-07-24 17:22:34.679696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:48.686 [2024-07-24 17:22:34.679720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ccc50 with addr=10.0.0.2, port=4420 00:21:48.686 [2024-07-24 17:22:34.679732] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ccc50 is same with the state(5) to be set 00:21:48.686 [2024-07-24 17:22:34.679750] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ccc50 (9): Bad file descriptor 00:21:48.686 [2024-07-24 17:22:34.679766] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:48.686 [2024-07-24 17:22:34.679777] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:48.686 [2024-07-24 17:22:34.679786] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:48.686 [2024-07-24 17:22:34.679801] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:48.686 [2024-07-24 17:22:34.689660] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:48.686 [2024-07-24 17:22:34.689765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:48.686 [2024-07-24 17:22:34.689788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ccc50 with addr=10.0.0.2, port=4420 00:21:48.686 [2024-07-24 17:22:34.689800] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ccc50 is same with the state(5) to be set 00:21:48.686 [2024-07-24 17:22:34.689818] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ccc50 (9): Bad file descriptor 00:21:48.686 [2024-07-24 17:22:34.689847] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:48.686 [2024-07-24 17:22:34.689860] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:48.686 [2024-07-24 17:22:34.689869] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:48.686 [2024-07-24 17:22:34.689885] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
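[editor's note] Each wait in this trace is driven by the waitforcondition helper whose expansion keeps appearing (local cond, local max=10, (( max-- )), eval, sleep 1, return 0). A minimal sketch, assuming the loop shape and the failure return value, which the xtrace does not show:

# Polls an arbitrary bash condition up to 10 times, one second apart.
waitforcondition() {
    local cond=$1
    local max=10
    while (( max-- )); do
        eval "$cond" && return 0   # condition met
        sleep 1
    done
    return 1                        # assumed failure path; not visible in the trace
}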
00:21:48.686 [2024-07-24 17:22:34.699725] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:48.686 [2024-07-24 17:22:34.699818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:48.686 [2024-07-24 17:22:34.699840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ccc50 with addr=10.0.0.2, port=4420 00:21:48.686 [2024-07-24 17:22:34.699853] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ccc50 is same with the state(5) to be set 00:21:48.686 [2024-07-24 17:22:34.699871] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ccc50 (9): Bad file descriptor 00:21:48.686 [2024-07-24 17:22:34.699887] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:48.686 [2024-07-24 17:22:34.699898] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:48.686 [2024-07-24 17:22:34.699908] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:48.686 [2024-07-24 17:22:34.699924] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:48.686 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:48.686 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:48.686 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:48.686 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:48.686 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:48.686 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:48.686 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:21:48.686 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:21:48.686 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:48.686 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:48.686 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:48.686 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.686 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:48.686 [2024-07-24 17:22:34.709782] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:48.686 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:48.686 [2024-07-24 17:22:34.709902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:48.686 [2024-07-24 17:22:34.709926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ccc50 with addr=10.0.0.2, port=4420 00:21:48.686 [2024-07-24 17:22:34.709938] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x20ccc50 is same with the state(5) to be set 00:21:48.686 [2024-07-24 17:22:34.709982] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ccc50 (9): Bad file descriptor 00:21:48.686 [2024-07-24 17:22:34.710017] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:48.686 [2024-07-24 17:22:34.710027] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:48.686 [2024-07-24 17:22:34.710037] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:48.686 [2024-07-24 17:22:34.710052] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:48.686 [2024-07-24 17:22:34.719843] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:48.686 [2024-07-24 17:22:34.720023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:48.686 [2024-07-24 17:22:34.720049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ccc50 with addr=10.0.0.2, port=4420 00:21:48.686 [2024-07-24 17:22:34.720063] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ccc50 is same with the state(5) to be set 00:21:48.686 [2024-07-24 17:22:34.720083] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ccc50 (9): Bad file descriptor 00:21:48.686 [2024-07-24 17:22:34.720101] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:48.686 [2024-07-24 17:22:34.720146] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:48.686 [2024-07-24 17:22:34.720161] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:48.686 [2024-07-24 17:22:34.720178] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:48.686 [2024-07-24 17:22:34.729979] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:48.686 [2024-07-24 17:22:34.730191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:48.686 [2024-07-24 17:22:34.730219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20ccc50 with addr=10.0.0.2, port=4420 00:21:48.686 [2024-07-24 17:22:34.730236] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20ccc50 is same with the state(5) to be set 00:21:48.686 [2024-07-24 17:22:34.730260] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ccc50 (9): Bad file descriptor 00:21:48.686 [2024-07-24 17:22:34.730279] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:48.686 [2024-07-24 17:22:34.730290] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:48.686 [2024-07-24 17:22:34.730301] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:48.686 [2024-07-24 17:22:34.730319] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
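[editor's note] The is_notification_count_eq checks in this trace (host/discovery.sh steps 123, 132 and 138) combine waitforcondition with a notification query against /tmp/host.sock. A rough reconstruction from the values printed here; advancing notify_id by the returned count is an assumption inferred from the 2 -> 4 step shown below:

get_notification_count() {
    # The trace shows 'notify_get_notifications -i 2', then counts of 0 and 2.
    notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
    notify_id=$((notify_id + notification_count))   # assumed update rule
}

is_notification_count_eq() {
    local expected_count=$1
    waitforcondition 'get_notification_count && ((notification_count == expected_count))'
}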
00:21:48.686 [2024-07-24 17:22:34.731851] bdev_nvme.c:6798:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:21:48.686 [2024-07-24 17:22:34.731923] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:48.686 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.686 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:48.686 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:48.686 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:21:48.686 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:21:48.686 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:48.686 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:48.686 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:21:48.686 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:21:48.686 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:48.686 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.686 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:48.686 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:48.686 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:48.686 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:48.687 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.687 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:21:48.687 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:48.687 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:21:48.687 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:21:48.687 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:48.687 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:48.687 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:48.687 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:48.687 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count 
'&&' '((notification_count' == 'expected_count))' 00:21:48.687 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:21:48.687 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:48.687 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.687 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:48.687 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:21:48.687 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.687 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:21:48.687 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:21:48.687 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:21:48.687 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:48.687 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:21:48.687 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.687 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:48.687 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.687 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:21:48.687 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:21:48.687 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:48.687 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:48.687 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:21:48.687 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:21:48.687 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:48.687 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:48.687 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.687 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:48.687 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:48.687 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:48.687 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.945 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:21:48.945 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:48.945 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:21:48.945 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:21:48.945 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:48.945 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:48.945 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:21:48.945 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:21:48.945 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:48.945 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:48.945 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:48.945 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.945 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:48.945 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:48.945 17:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.945 17:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:21:48.945 17:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:48.945 17:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:21:48.945 17:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:21:48.945 17:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:48.945 17:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:48.945 17:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:48.945 17:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:48.945 17:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:48.945 17:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:21:48.945 17:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:48.945 17:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:48.945 17:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.945 17:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:48.946 17:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.946 17:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:21:48.946 17:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:21:48.946 17:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:21:48.946 17:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:48.946 17:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:48.946 17:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.946 17:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:49.878 [2024-07-24 17:22:36.067577] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:49.878 [2024-07-24 17:22:36.067603] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:49.878 [2024-07-24 17:22:36.067637] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:50.137 [2024-07-24 17:22:36.153722] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:21:50.137 [2024-07-24 17:22:36.214078] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:50.137 [2024-07-24 17:22:36.214152] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:50.137 17:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.137 17:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:50.137 17:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:21:50.137 17:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:50.137 17:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:50.137 17:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:50.137 17:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:50.137 17:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:50.137 17:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:21:50.137 17:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.137 17:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:50.137 2024/07/24 17:22:36 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:21:50.137 request: 00:21:50.137 { 00:21:50.137 "method": "bdev_nvme_start_discovery", 00:21:50.137 "params": { 00:21:50.137 "name": "nvme", 00:21:50.137 "trtype": "tcp", 00:21:50.137 "traddr": "10.0.0.2", 00:21:50.137 "adrfam": "ipv4", 00:21:50.137 "trsvcid": "8009", 00:21:50.137 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:50.137 "wait_for_attach": true 00:21:50.137 } 00:21:50.137 } 00:21:50.137 Got JSON-RPC error response 00:21:50.137 GoRPCClient: error on JSON-RPC call 00:21:50.137 17:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:50.137 17:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:21:50.137 17:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:50.137 17:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:50.137 17:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:50.137 17:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:21:50.137 17:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:50.137 17:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.137 17:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:50.137 17:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:50.137 17:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:21:50.137 17:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:21:50.137 17:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.137 17:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:21:50.137 17:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:21:50.137 17:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:50.137 17:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:50.137 17:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.137 17:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:50.137 17:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:50.137 17:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:50.137 17:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.137 17:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 
-- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:50.137 17:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:50.137 17:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:21:50.137 17:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:50.137 17:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:50.137 17:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:50.137 17:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:50.137 17:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:50.137 17:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:50.137 17:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.138 17:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:50.138 2024/07/24 17:22:36 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:21:50.138 request: 00:21:50.138 { 00:21:50.138 "method": "bdev_nvme_start_discovery", 00:21:50.138 "params": { 00:21:50.138 "name": "nvme_second", 00:21:50.138 "trtype": "tcp", 00:21:50.138 "traddr": "10.0.0.2", 00:21:50.138 "adrfam": "ipv4", 00:21:50.138 "trsvcid": "8009", 00:21:50.138 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:50.138 "wait_for_attach": true 00:21:50.138 } 00:21:50.138 } 00:21:50.138 Got JSON-RPC error response 00:21:50.138 GoRPCClient: error on JSON-RPC call 00:21:50.138 17:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:50.138 17:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:21:50.138 17:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:50.138 17:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:50.138 17:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:50.138 17:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:21:50.138 17:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:50.138 17:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:50.138 17:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:21:50.138 17:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.138 17:22:36 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:50.138 17:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:21:50.396 17:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.396 17:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:21:50.396 17:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:21:50.396 17:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:50.396 17:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:50.396 17:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:50.396 17:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.396 17:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:50.396 17:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:50.396 17:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.396 17:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:50.396 17:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:50.396 17:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:21:50.396 17:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:50.396 17:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:50.396 17:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:50.396 17:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:50.396 17:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:50.396 17:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:50.396 17:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.396 17:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:51.329 [2024-07-24 17:22:37.478998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:51.329 [2024-07-24 17:22:37.479052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c67b0 with addr=10.0.0.2, port=8010 00:21:51.329 [2024-07-24 17:22:37.479078] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:21:51.329 [2024-07-24 17:22:37.479089] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:51.329 [2024-07-24 17:22:37.479110] bdev_nvme.c:7073:discovery_poller: *ERROR*: 
Discovery[10.0.0.2:8010] could not start discovery connect 00:21:52.262 [2024-07-24 17:22:38.478981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:52.262 [2024-07-24 17:22:38.479046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20c67b0 with addr=10.0.0.2, port=8010 00:21:52.262 [2024-07-24 17:22:38.479073] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:21:52.262 [2024-07-24 17:22:38.479086] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:52.262 [2024-07-24 17:22:38.479108] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:21:53.633 [2024-07-24 17:22:39.478814] bdev_nvme.c:7054:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:21:53.633 2024/07/24 17:22:39 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8010 trtype:tcp wait_for_attach:%!s(bool=false)], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out 00:21:53.633 request: 00:21:53.633 { 00:21:53.633 "method": "bdev_nvme_start_discovery", 00:21:53.633 "params": { 00:21:53.633 "name": "nvme_second", 00:21:53.633 "trtype": "tcp", 00:21:53.633 "traddr": "10.0.0.2", 00:21:53.633 "adrfam": "ipv4", 00:21:53.633 "trsvcid": "8010", 00:21:53.633 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:53.633 "wait_for_attach": false, 00:21:53.633 "attach_timeout_ms": 3000 00:21:53.633 } 00:21:53.633 } 00:21:53.633 Got JSON-RPC error response 00:21:53.633 GoRPCClient: error on JSON-RPC call 00:21:53.633 17:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:53.633 17:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:21:53.633 17:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:53.633 17:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:53.633 17:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:53.633 17:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:21:53.633 17:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:53.633 17:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:53.633 17:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.633 17:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:53.633 17:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:21:53.633 17:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:21:53.633 17:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.633 17:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:21:53.633 17:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:21:53.633 17:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 88097 00:21:53.633 17:22:39 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:21:53.633 17:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:53.633 17:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:21:53.633 17:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:53.633 17:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:21:53.633 17:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:53.633 17:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:53.633 rmmod nvme_tcp 00:21:53.633 rmmod nvme_fabrics 00:21:53.633 rmmod nvme_keyring 00:21:53.633 17:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:53.633 17:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:21:53.633 17:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:21:53.633 17:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 88047 ']' 00:21:53.633 17:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 88047 00:21:53.633 17:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 88047 ']' 00:21:53.633 17:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 88047 00:21:53.633 17:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:21:53.633 17:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:53.633 17:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88047 00:21:53.633 17:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:53.633 17:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:53.633 killing process with pid 88047 00:21:53.633 17:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88047' 00:21:53.633 17:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 88047 00:21:53.633 17:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 88047 00:21:53.892 17:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:53.892 17:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:53.892 17:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:53.892 17:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:53.892 17:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:53.892 17:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:53.892 17:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:53.892 17:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:53.892 17:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@279 -- 
# ip -4 addr flush nvmf_init_if 00:21:53.892 ************************************ 00:21:53.892 END TEST nvmf_host_discovery 00:21:53.892 ************************************ 00:21:53.892 00:21:53.892 real 0m11.077s 00:21:53.892 user 0m21.700s 00:21:53.892 sys 0m1.795s 00:21:53.892 17:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:53.893 17:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:53.893 17:22:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:21:53.893 17:22:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:53.893 17:22:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:53.893 17:22:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:53.893 ************************************ 00:21:53.893 START TEST nvmf_host_multipath_status 00:21:53.893 ************************************ 00:21:53.893 17:22:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:21:53.893 * Looking for test storage... 00:21:53.893 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:53.893 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:53.893 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:21:53.893 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:53.893 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:53.893 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:53.893 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:53.893 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:53.893 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:53.893 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:53.893 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:53.893 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:53.893 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:53.893 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:21:53.893 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:21:53.893 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:53.893 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:53.893 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 
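[editor's note] The multipath_status preamble above collects the host identity (nvme gen-hostnqn into NVME_HOSTNQN and NVME_HOSTID) and the connect command into NVME_CONNECT plus the NVME_HOST option array. For orientation, this is how those pieces are typically combined later in the test; the exact call site is not shown in this excerpt, so treat the line below as an illustrative assumption built from the variables set here:

# Illustrative only: connect the initiator using the identity prepared above.
$NVME_CONNECT "${NVME_HOST[@]}" -t tcp -a "$NVMF_FIRST_TARGET_IP" -s "$NVMF_PORT" -n nqn.2016-06.io.spdk:cnode1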
00:21:53.893 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:53.893 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:53.893 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:53.893 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:53.893 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:53.893 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.893 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.893 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.893 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:21:53.893 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
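[editor's note] The very long PATH assignments above come from paths/export.sh, which each test script sources and which unconditionally prepends the golangci, Go and protoc directories, so the echoed PATH accumulates repeated entries over the run. Reconstructed from the @2..@6 xtrace lines above (the script itself is not part of this log):

PATH=/opt/golangci/1.54.2/bin:$PATH   # paths/export.sh@2
PATH=/opt/go/1.21.1/bin:$PATH         # paths/export.sh@3
PATH=/opt/protoc/21.7/bin:$PATH       # paths/export.sh@4
export PATH                           # paths/export.sh@5
echo "$PATH"                          # paths/export.sh@6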
00:21:53.893 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:21:53.893 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:53.893 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:53.893 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:53.893 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:53.893 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:53.893 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:53.893 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:53.893 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:53.893 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:53.893 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:53.893 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:53.893 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:21:53.893 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:53.893 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:21:53.893 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:21:53.893 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:53.893 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:53.893 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:53.893 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:53.893 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:53.893 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:53.893 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:53.893 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:53.893 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:21:53.893 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:21:53.893 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:21:53.893 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:21:53.893 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:21:53.893 
17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # nvmf_veth_init 00:21:53.893 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:53.893 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:53.893 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:53.893 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:53.893 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:53.893 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:53.893 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:53.893 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:53.893 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:53.893 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:53.893 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:53.893 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:53.894 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:53.894 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:53.894 Cannot find device "nvmf_tgt_br" 00:21:53.894 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # true 00:21:53.894 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:53.894 Cannot find device "nvmf_tgt_br2" 00:21:53.894 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # true 00:21:53.894 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:54.152 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:54.152 Cannot find device "nvmf_tgt_br" 00:21:54.152 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # true 00:21:54.152 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:54.152 Cannot find device "nvmf_tgt_br2" 00:21:54.152 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # true 00:21:54.152 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:54.152 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:54.152 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:54.152 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:54.152 17:22:40 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:21:54.152 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:54.152 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:54.152 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:21:54.152 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:54.152 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:54.152 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:54.152 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:54.152 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:54.152 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:54.152 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:54.152 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:54.152 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:54.152 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:54.152 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:54.152 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:54.152 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:54.152 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:54.152 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:54.152 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:54.152 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:54.152 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:54.152 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:54.152 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:54.152 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:54.411 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:54.411 17:22:40 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:54.411 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:54.411 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:54.411 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:21:54.411 00:21:54.411 --- 10.0.0.2 ping statistics --- 00:21:54.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:54.411 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:21:54.411 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:54.411 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:54.411 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:21:54.411 00:21:54.411 --- 10.0.0.3 ping statistics --- 00:21:54.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:54.411 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:21:54.411 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:54.411 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:54.411 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:21:54.411 00:21:54.411 --- 10.0.0.1 ping statistics --- 00:21:54.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:54.411 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:21:54.411 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:54.411 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@433 -- # return 0 00:21:54.411 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:54.411 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:54.411 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:54.411 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:54.411 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:54.411 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:54.411 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:54.411 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:21:54.411 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:54.411 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:54.411 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:54.411 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=88584 00:21:54.411 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 88584 00:21:54.411 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 88584 ']' 00:21:54.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
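nvmf_veth_init above builds the whole virtual topology from scratch: a namespace for the target, veth pairs whose peer ends are enslaved to the nvmf_br bridge, the 10.0.0.1/2/3 addresses, iptables ACCEPT rules for port 4420 and bridge forwarding, and three pings to verify reachability. A condensed sketch of the same technique for a single initiator/target pair (interface and address names follow the log; the second target interface and the firewall rules are omitted):

    # Target lives in its own network namespace.
    ip netns add nvmf_tgt_ns_spdk
    # veth pairs: one end for traffic, the peer end gets bridged.
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    # Addresses: initiator 10.0.0.1, target 10.0.0.2 on the same /24.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    # Bring everything up.
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # Bridge the peer ends together so initiator and target can talk.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # Initiator-side reachability check, mirroring common.sh@205.
    ping -c 1 10.0.0.2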
00:21:54.411 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:54.411 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:54.411 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:54.411 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:54.411 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:21:54.411 17:22:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:54.411 [2024-07-24 17:22:40.501008] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:21:54.411 [2024-07-24 17:22:40.501142] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:54.411 [2024-07-24 17:22:40.635027] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:54.669 [2024-07-24 17:22:40.726922] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:54.669 [2024-07-24 17:22:40.727019] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:54.669 [2024-07-24 17:22:40.727048] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:54.669 [2024-07-24 17:22:40.727057] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:54.669 [2024-07-24 17:22:40.727064] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
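waitforlisten then blocks until the freshly started nvmf_tgt (pid 88584, launched inside the target namespace with -m 0x3, i.e. cores 0 and 1) answers on /var/tmp/spdk.sock. A minimal sketch of that start-and-wait pattern, assuming the repository layout used in this run (rpc_get_methods is a standard SPDK RPC; the retry limit is arbitrary):

    # Start the target inside the namespace and remember its pid.
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 &
    nvmfpid=$!
    # Poll the RPC socket until the app is ready, giving up after ~100s.
    for _ in $(seq 1 100); do
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; then
            break
        fi
        sleep 1
    done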
00:21:54.669 [2024-07-24 17:22:40.727191] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:54.669 [2024-07-24 17:22:40.727474] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:55.235 17:22:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:55.235 17:22:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:21:55.235 17:22:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:55.235 17:22:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:55.235 17:22:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:55.492 17:22:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:55.492 17:22:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=88584 00:21:55.492 17:22:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:55.492 [2024-07-24 17:22:41.674919] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:55.492 17:22:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:56.058 Malloc0 00:21:56.058 17:22:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:21:56.058 17:22:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:56.316 17:22:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:56.573 [2024-07-24 17:22:42.694454] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:56.573 17:22:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:56.830 [2024-07-24 17:22:42.918606] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:56.830 17:22:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=88683 00:21:56.830 17:22:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:21:56.831 17:22:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:56.831 17:22:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 88683 /var/tmp/bdevperf.sock 00:21:56.831 17:22:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 88683 ']' 00:21:56.831 17:22:42 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:56.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:56.831 17:22:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:56.831 17:22:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:56.831 17:22:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:56.831 17:22:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:57.762 17:22:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:57.762 17:22:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:21:57.762 17:22:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:21:58.019 17:22:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:21:58.585 Nvme0n1 00:21:58.585 17:22:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:21:58.843 Nvme0n1 00:21:58.843 17:22:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:21:58.843 17:22:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:22:00.740 17:22:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:22:00.740 17:22:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:22:00.998 17:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:01.563 17:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:22:02.498 17:22:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:22:02.498 17:22:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:02.498 17:22:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:02.498 17:22:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:02.757 17:22:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:02.757 17:22:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:02.757 17:22:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:02.757 17:22:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:03.016 17:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:03.016 17:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:03.016 17:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:03.016 17:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:03.274 17:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:03.274 17:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:03.274 17:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:03.274 17:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:03.532 17:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:03.532 17:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:03.532 17:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:03.532 17:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:03.532 17:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:03.532 17:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:03.532 17:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:03.532 17:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:03.791 17:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:03.791 17:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state 
non_optimized optimized 00:22:03.791 17:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:04.049 17:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:04.307 17:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:22:05.683 17:22:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:22:05.683 17:22:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:05.683 17:22:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:05.683 17:22:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:05.683 17:22:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:05.683 17:22:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:05.683 17:22:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:05.684 17:22:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:05.942 17:22:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:05.942 17:22:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:05.942 17:22:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:05.942 17:22:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:06.200 17:22:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:06.200 17:22:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:06.200 17:22:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:06.200 17:22:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:06.458 17:22:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:06.458 17:22:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:06.458 17:22:52 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:06.458 17:22:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:06.717 17:22:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:06.718 17:22:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:06.718 17:22:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:06.718 17:22:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:06.718 17:22:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:06.718 17:22:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:22:06.718 17:22:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:06.983 17:22:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:22:07.241 17:22:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:22:08.174 17:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:22:08.174 17:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:08.174 17:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:08.174 17:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:08.432 17:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:08.432 17:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:08.432 17:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:08.432 17:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:08.689 17:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:08.689 17:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:08.948 17:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:08.948 17:22:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:09.206 17:22:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:09.206 17:22:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:09.206 17:22:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:09.206 17:22:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:09.464 17:22:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:09.464 17:22:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:09.464 17:22:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:09.464 17:22:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:09.464 17:22:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:09.464 17:22:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:09.464 17:22:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:09.464 17:22:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:09.723 17:22:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:09.723 17:22:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:22:09.723 17:22:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:09.981 17:22:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:22:10.239 17:22:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:22:11.614 17:22:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:22:11.614 17:22:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:11.614 17:22:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:11.614 17:22:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:11.614 17:22:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:11.614 17:22:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:11.614 17:22:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:11.614 17:22:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:11.872 17:22:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:11.872 17:22:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:11.872 17:22:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:11.872 17:22:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:12.131 17:22:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:12.131 17:22:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:12.131 17:22:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:12.131 17:22:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:12.389 17:22:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:12.389 17:22:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:12.389 17:22:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:12.389 17:22:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:12.648 17:22:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:12.648 17:22:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:12.648 17:22:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:12.648 17:22:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:12.906 17:22:59 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:12.906 17:22:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:22:12.906 17:22:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:22:13.165 17:22:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:22:13.423 17:22:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:22:14.356 17:23:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:22:14.356 17:23:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:14.356 17:23:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:14.356 17:23:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:14.614 17:23:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:14.614 17:23:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:14.614 17:23:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:14.614 17:23:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:14.872 17:23:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:14.872 17:23:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:14.872 17:23:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:14.872 17:23:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:15.131 17:23:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:15.131 17:23:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:15.131 17:23:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:15.131 17:23:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:15.389 17:23:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:15.389 17:23:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:22:15.389 17:23:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:15.389 17:23:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:15.647 17:23:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:15.647 17:23:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:15.647 17:23:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:15.647 17:23:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:15.905 17:23:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:15.905 17:23:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:22:15.905 17:23:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:22:16.163 17:23:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:16.421 17:23:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:22:17.397 17:23:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:22:17.397 17:23:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:17.397 17:23:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:17.397 17:23:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:17.655 17:23:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:17.655 17:23:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:17.655 17:23:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:17.655 17:23:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:17.913 17:23:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:22:17.913 17:23:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:17.913 17:23:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:17.913 17:23:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:18.171 17:23:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:18.172 17:23:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:18.172 17:23:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:18.172 17:23:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:18.430 17:23:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:18.430 17:23:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:22:18.430 17:23:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:18.430 17:23:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:18.688 17:23:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:18.688 17:23:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:18.688 17:23:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:18.688 17:23:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:18.946 17:23:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:18.946 17:23:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:22:19.204 17:23:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:22:19.204 17:23:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:22:19.463 17:23:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:19.721 17:23:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@120 -- # sleep 1 00:22:20.655 17:23:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:22:20.655 17:23:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:20.655 17:23:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:20.655 17:23:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:20.914 17:23:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:20.914 17:23:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:20.914 17:23:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:20.914 17:23:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:21.172 17:23:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:21.172 17:23:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:21.172 17:23:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:21.172 17:23:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:21.450 17:23:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:21.450 17:23:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:21.450 17:23:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:21.450 17:23:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:21.718 17:23:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:21.718 17:23:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:21.718 17:23:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:21.718 17:23:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:21.975 17:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:21.975 17:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 
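Each check_status round above reduces to the same jq query against bdev_nvme_get_io_paths on the bdevperf RPC socket: select the io_path whose transport.trsvcid matches the port, then read one of its current/connected/accessible flags. A sketch of that helper as it is used throughout this test (names mirror port_status and the rpc_py/bdevperf_rpc_sock variables set earlier in multipath_status.sh):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bdevperf_rpc_sock=/var/tmp/bdevperf.sock
    # port_status <trsvcid> <field> <expected>: compare one io_path attribute.
    port_status() {
        local port=$1 attr=$2 expected=$3 actual
        actual=$("$rpc_py" -s "$bdevperf_rpc_sock" bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
        [[ "$actual" == "$expected" ]]
    }
    # Example: with ANA state optimized/optimized, port 4420 should be the current path.
    port_status 4420 current true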
00:22:21.975 17:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:21.976 17:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:22.233 17:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:22.233 17:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:22:22.234 17:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:22.492 17:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:22.750 17:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:22:23.784 17:23:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:22:23.784 17:23:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:23.784 17:23:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:23.784 17:23:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:24.042 17:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:24.042 17:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:24.042 17:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:24.042 17:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:24.299 17:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:24.299 17:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:24.299 17:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:24.299 17:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:24.557 17:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:24.557 17:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:24.557 17:23:10 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:24.557 17:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:24.815 17:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:24.815 17:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:24.815 17:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:24.815 17:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:25.073 17:23:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:25.073 17:23:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:25.073 17:23:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:25.073 17:23:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:25.073 17:23:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:25.073 17:23:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:22:25.073 17:23:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:25.639 17:23:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:22:25.639 17:23:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:22:27.014 17:23:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:22:27.014 17:23:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:27.014 17:23:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:27.014 17:23:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:27.014 17:23:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:27.014 17:23:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:27.014 17:23:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:27.014 17:23:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:27.273 17:23:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:27.273 17:23:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:27.273 17:23:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:27.273 17:23:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:27.530 17:23:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:27.530 17:23:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:27.530 17:23:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:27.530 17:23:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:27.788 17:23:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:27.788 17:23:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:27.788 17:23:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:27.788 17:23:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:28.045 17:23:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:28.045 17:23:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:28.045 17:23:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:28.045 17:23:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:28.304 17:23:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:28.304 17:23:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:22:28.304 17:23:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:28.561 17:23:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:22:28.819 17:23:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:22:29.752 17:23:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:22:29.752 17:23:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:29.752 17:23:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:29.752 17:23:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:30.010 17:23:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:30.010 17:23:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:30.010 17:23:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:30.010 17:23:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:30.268 17:23:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:30.268 17:23:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:30.268 17:23:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:30.268 17:23:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:30.526 17:23:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:30.526 17:23:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:30.526 17:23:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:30.526 17:23:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:30.784 17:23:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:30.784 17:23:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:30.784 17:23:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:30.784 17:23:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:31.043 17:23:17 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:31.043 17:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:31.043 17:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:31.043 17:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:31.356 17:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:31.356 17:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 88683 00:22:31.356 17:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 88683 ']' 00:22:31.356 17:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 88683 00:22:31.356 17:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:22:31.356 17:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:31.356 17:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88683 00:22:31.356 17:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:31.356 17:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:31.356 17:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88683' 00:22:31.356 killing process with pid 88683 00:22:31.356 17:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 88683 00:22:31.356 17:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 88683 00:22:31.356 Connection closed with partial response: 00:22:31.356 00:22:31.356 00:22:31.634 17:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 88683 00:22:31.634 17:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:31.634 [2024-07-24 17:22:42.983588] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:22:31.634 [2024-07-24 17:22:42.983697] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88683 ] 00:22:31.634 [2024-07-24 17:22:43.116302] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:31.634 [2024-07-24 17:22:43.230623] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:31.634 Running I/O for 90 seconds... 
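The teardown traced above (killprocess 88683) follows the usual autotest pattern: confirm the pid still exists, make sure it is not a sudo wrapper, then kill it and wait for it to exit. A minimal sketch reconstructed from the trace, not copied from common/autotest_common.sh:

# Sketch of the killprocess pattern seen in the trace above (Linux-only branch shown).
killprocess() {
    local pid=$1
    [[ -n "$pid" ]] || return 1
    kill -0 "$pid" || return 1                      # process must still be alive
    if [[ "$(uname)" == "Linux" ]]; then
        local name
        name=$(ps --no-headers -o comm= "$pid")
        [[ "$name" != "sudo" ]] || return 1         # never kill a sudo wrapper directly
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                     # works because bdevperf was started by this shell
}

After the kill, the test dumps /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt, which is the bdevperf output reproduced below; the ASYMMETRIC ACCESS INACCESSIBLE (03/02) completions it contains are the expected path errors triggered by the ANA state changes exercised earlier in the test.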
00:22:31.634 [2024-07-24 17:22:59.244745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.634 [2024-07-24 17:22:59.244816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:31.634 [2024-07-24 17:22:59.244888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:81600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.634 [2024-07-24 17:22:59.244908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:31.634 [2024-07-24 17:22:59.244928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:81608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.634 [2024-07-24 17:22:59.244941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:31.634 [2024-07-24 17:22:59.244960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:81616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.634 [2024-07-24 17:22:59.244973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:31.634 [2024-07-24 17:22:59.244992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:81624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.634 [2024-07-24 17:22:59.245005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:31.634 [2024-07-24 17:22:59.245024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:81632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.634 [2024-07-24 17:22:59.245036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:31.634 [2024-07-24 17:22:59.245055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:81640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.634 [2024-07-24 17:22:59.245068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:31.634 [2024-07-24 17:22:59.245087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:81648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.634 [2024-07-24 17:22:59.245099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:31.634 [2024-07-24 17:22:59.245147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:81656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.634 [2024-07-24 17:22:59.245163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:31.634 [2024-07-24 17:22:59.245183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:81664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.634 [2024-07-24 17:22:59.245196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:31.634 [2024-07-24 17:22:59.245216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:81672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.634 [2024-07-24 17:22:59.245250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:31.634 [2024-07-24 17:22:59.245271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:81680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.634 [2024-07-24 17:22:59.245285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:31.634 [2024-07-24 17:22:59.245305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:81688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.634 [2024-07-24 17:22:59.245318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:31.634 [2024-07-24 17:22:59.245337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:81696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.634 [2024-07-24 17:22:59.245350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:31.634 [2024-07-24 17:22:59.245369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:81704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.634 [2024-07-24 17:22:59.245382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:31.634 [2024-07-24 17:22:59.245401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:81712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.634 [2024-07-24 17:22:59.245413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:31.634 [2024-07-24 17:22:59.245434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:81720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.634 [2024-07-24 17:22:59.245447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:31.634 [2024-07-24 17:22:59.245466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:81728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.634 [2024-07-24 17:22:59.245480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:31.634 [2024-07-24 17:22:59.245499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:81736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.634 [2024-07-24 17:22:59.245527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:31.634 [2024-07-24 17:22:59.245546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:81744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.634 [2024-07-24 17:22:59.245574] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:31.634 [2024-07-24 17:22:59.245593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:81752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.634 [2024-07-24 17:22:59.245607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:31.634 [2024-07-24 17:22:59.245626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:81760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.635 [2024-07-24 17:22:59.245639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:31.635 [2024-07-24 17:22:59.245658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:81768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.635 [2024-07-24 17:22:59.245671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:31.635 [2024-07-24 17:22:59.245701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:81776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.635 [2024-07-24 17:22:59.245717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:31.635 [2024-07-24 17:22:59.245736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:81784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.635 [2024-07-24 17:22:59.245750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:31.635 [2024-07-24 17:22:59.245770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:81792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.635 [2024-07-24 17:22:59.245783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:31.635 [2024-07-24 17:22:59.245803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:81800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.635 [2024-07-24 17:22:59.245816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:31.635 [2024-07-24 17:22:59.245836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:81808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.635 [2024-07-24 17:22:59.245849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:31.635 [2024-07-24 17:22:59.245868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:81816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.635 [2024-07-24 17:22:59.245881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:31.635 [2024-07-24 17:22:59.245901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:81824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:31.635 [2024-07-24 17:22:59.245914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.635 [2024-07-24 17:22:59.245934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:81832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.635 [2024-07-24 17:22:59.245947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.635 [2024-07-24 17:22:59.245967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:81840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.635 [2024-07-24 17:22:59.245980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:31.635 [2024-07-24 17:22:59.246015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:81848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.635 [2024-07-24 17:22:59.246028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:31.635 [2024-07-24 17:22:59.246047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:81856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.635 [2024-07-24 17:22:59.246061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:31.635 [2024-07-24 17:22:59.246080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:81864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.635 [2024-07-24 17:22:59.246093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:31.635 [2024-07-24 17:22:59.246119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:81872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.635 [2024-07-24 17:22:59.246134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:31.635 [2024-07-24 17:22:59.246183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:81880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.635 [2024-07-24 17:22:59.246197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:31.635 [2024-07-24 17:22:59.246217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:81888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.635 [2024-07-24 17:22:59.246231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:31.635 [2024-07-24 17:22:59.246250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:81896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.635 [2024-07-24 17:22:59.246263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:31.635 [2024-07-24 17:22:59.246283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 
lba:81904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.635 [2024-07-24 17:22:59.246296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:31.635 [2024-07-24 17:22:59.246315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:81912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.635 [2024-07-24 17:22:59.246329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:31.635 [2024-07-24 17:22:59.246348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:81920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.635 [2024-07-24 17:22:59.246362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:31.635 [2024-07-24 17:22:59.246948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:81928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.635 [2024-07-24 17:22:59.247021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:31.635 [2024-07-24 17:22:59.247053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:82056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.635 [2024-07-24 17:22:59.247070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:31.635 [2024-07-24 17:22:59.247094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:82064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.635 [2024-07-24 17:22:59.247109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:31.635 [2024-07-24 17:22:59.247147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:82072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.635 [2024-07-24 17:22:59.247166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:31.635 [2024-07-24 17:22:59.247191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:82080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.635 [2024-07-24 17:22:59.247205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:31.635 [2024-07-24 17:22:59.247240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:82088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.635 [2024-07-24 17:22:59.247268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:31.635 [2024-07-24 17:22:59.247295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:82096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.635 [2024-07-24 17:22:59.247311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:31.635 [2024-07-24 17:22:59.247350] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:82104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.635 [2024-07-24 17:22:59.247379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:31.635 [2024-07-24 17:22:59.247403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:82112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.635 [2024-07-24 17:22:59.247417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:31.635 [2024-07-24 17:22:59.247440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:82120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.635 [2024-07-24 17:22:59.247454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:31.635 [2024-07-24 17:22:59.247477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:82128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.635 [2024-07-24 17:22:59.247490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:31.635 [2024-07-24 17:22:59.247514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:82136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.635 [2024-07-24 17:22:59.247527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:31.635 [2024-07-24 17:22:59.247550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:82144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.635 [2024-07-24 17:22:59.247578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:31.635 [2024-07-24 17:22:59.247600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:82152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.635 [2024-07-24 17:22:59.247614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:31.635 [2024-07-24 17:22:59.247636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:82160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.635 [2024-07-24 17:22:59.247650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:31.635 [2024-07-24 17:22:59.247673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:82168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.635 [2024-07-24 17:22:59.247686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:31.635 [2024-07-24 17:22:59.247708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:82176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.635 [2024-07-24 17:22:59.247722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001d p:0 m:0 dnr:0 
00:22:31.635 [2024-07-24 17:22:59.247744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:82184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.635 [2024-07-24 17:22:59.247831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:31.636 [2024-07-24 17:22:59.247858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:82192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.636 [2024-07-24 17:22:59.247873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:31.636 [2024-07-24 17:22:59.247896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:82200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.636 [2024-07-24 17:22:59.247910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:31.636 [2024-07-24 17:22:59.247932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:82208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.636 [2024-07-24 17:22:59.247945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:31.636 [2024-07-24 17:22:59.247968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:82216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.636 [2024-07-24 17:22:59.247982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:31.636 [2024-07-24 17:22:59.248005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:82224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.636 [2024-07-24 17:22:59.248019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:31.636 [2024-07-24 17:22:59.248041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:82232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.636 [2024-07-24 17:22:59.248055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:31.636 [2024-07-24 17:22:59.248077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.636 [2024-07-24 17:22:59.248091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:31.636 [2024-07-24 17:22:59.248113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:82248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.636 [2024-07-24 17:22:59.248127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:31.636 [2024-07-24 17:22:59.248161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:82256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.636 [2024-07-24 17:22:59.248178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:53 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:31.636 [2024-07-24 17:22:59.248217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:82264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.636 [2024-07-24 17:22:59.248231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:31.636 [2024-07-24 17:22:59.248254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:82272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.636 [2024-07-24 17:22:59.248268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:31.636 [2024-07-24 17:22:59.248291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:82280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.636 [2024-07-24 17:22:59.248306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:31.636 [2024-07-24 17:22:59.248342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:82288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.636 [2024-07-24 17:22:59.248358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:31.636 [2024-07-24 17:22:59.248382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:82296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.636 [2024-07-24 17:22:59.248396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:31.636 [2024-07-24 17:22:59.248420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:82304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.636 [2024-07-24 17:22:59.248439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:31.636 [2024-07-24 17:22:59.248462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:82312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.636 [2024-07-24 17:22:59.248475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:31.636 [2024-07-24 17:22:59.248498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:82320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.636 [2024-07-24 17:22:59.248512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:31.636 [2024-07-24 17:22:59.248536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:82328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.636 [2024-07-24 17:22:59.248549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:31.636 [2024-07-24 17:22:59.248572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:82336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.636 [2024-07-24 17:22:59.248586] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:31.636 [2024-07-24 17:22:59.248609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:82344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.636 [2024-07-24 17:22:59.248637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:31.636 [2024-07-24 17:22:59.248661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:82352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.636 [2024-07-24 17:22:59.248675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:31.636 [2024-07-24 17:22:59.248697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:82360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.636 [2024-07-24 17:22:59.248711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:31.636 [2024-07-24 17:22:59.248733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:82368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.636 [2024-07-24 17:22:59.248747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:31.636 [2024-07-24 17:22:59.248769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:82376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.636 [2024-07-24 17:22:59.248783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:31.636 [2024-07-24 17:22:59.248813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:82384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.636 [2024-07-24 17:22:59.248829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:31.636 [2024-07-24 17:22:59.248851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:82392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.636 [2024-07-24 17:22:59.248865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:31.636 [2024-07-24 17:22:59.248887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:82400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.636 [2024-07-24 17:22:59.248901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:31.636 [2024-07-24 17:22:59.248923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:82408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.636 [2024-07-24 17:22:59.248937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:31.636 [2024-07-24 17:22:59.248959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:82416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:31.636 [2024-07-24 17:22:59.248988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:31.636 [2024-07-24 17:22:59.249011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:82424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.636 [2024-07-24 17:22:59.249024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:31.636 [2024-07-24 17:22:59.249046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:81936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.636 [2024-07-24 17:22:59.249059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:31.636 [2024-07-24 17:22:59.249097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:81944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.636 [2024-07-24 17:22:59.249126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:31.636 [2024-07-24 17:22:59.249149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:81952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.636 [2024-07-24 17:22:59.249163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:31.636 [2024-07-24 17:22:59.249197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:81960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.636 [2024-07-24 17:22:59.249212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:31.636 [2024-07-24 17:22:59.249236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:81968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.636 [2024-07-24 17:22:59.249250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:31.636 [2024-07-24 17:22:59.249274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:81976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.636 [2024-07-24 17:22:59.249288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:31.636 [2024-07-24 17:22:59.249537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:81984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.636 [2024-07-24 17:22:59.249594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:31.636 [2024-07-24 17:22:59.249629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:81992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.636 [2024-07-24 17:22:59.249645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:31.636 [2024-07-24 17:22:59.249672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 
nsid:1 lba:82000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.637 [2024-07-24 17:22:59.249686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:31.637 [2024-07-24 17:22:59.249711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:82008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.637 [2024-07-24 17:22:59.249724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:31.637 [2024-07-24 17:22:59.249750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:82016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.637 [2024-07-24 17:22:59.249763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:31.637 [2024-07-24 17:22:59.249788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:82024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.637 [2024-07-24 17:22:59.249801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:31.637 [2024-07-24 17:22:59.249842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:82032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.637 [2024-07-24 17:22:59.249856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:31.637 [2024-07-24 17:22:59.249882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:82040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.637 [2024-07-24 17:22:59.249896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:31.637 [2024-07-24 17:22:59.249922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.637 [2024-07-24 17:22:59.249935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:31.637 [2024-07-24 17:22:59.249962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:82440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.637 [2024-07-24 17:22:59.249976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:31.637 [2024-07-24 17:22:59.250002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:82448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.637 [2024-07-24 17:22:59.250016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:31.637 [2024-07-24 17:22:59.250042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:82456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.637 [2024-07-24 17:22:59.250055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:31.637 [2024-07-24 17:22:59.250082] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:82464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.637 [2024-07-24 17:22:59.250103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:31.637 [2024-07-24 17:22:59.250162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:82472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.637 [2024-07-24 17:22:59.250195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:31.637 [2024-07-24 17:22:59.250222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:82480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.637 [2024-07-24 17:22:59.250236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:31.637 [2024-07-24 17:22:59.250270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:82488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.637 [2024-07-24 17:22:59.250288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:31.637 [2024-07-24 17:22:59.250314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.637 [2024-07-24 17:22:59.250328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:31.637 [2024-07-24 17:22:59.250354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:82504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.637 [2024-07-24 17:22:59.250368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:31.637 [2024-07-24 17:22:59.250394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:82512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.637 [2024-07-24 17:22:59.250408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:31.637 [2024-07-24 17:22:59.250434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:82520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.637 [2024-07-24 17:22:59.250447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:31.637 [2024-07-24 17:22:59.250487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:82528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.637 [2024-07-24 17:22:59.250501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:31.637 [2024-07-24 17:22:59.250526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:82536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.637 [2024-07-24 17:22:59.250539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 
00:22:31.637 [2024-07-24 17:22:59.250565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:82544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.637 [2024-07-24 17:22:59.250578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:31.637 [2024-07-24 17:22:59.250604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:82552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.637 [2024-07-24 17:22:59.250617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:31.637 [2024-07-24 17:23:14.848758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:101520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.637 [2024-07-24 17:23:14.848824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:31.637 [2024-07-24 17:23:14.848910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:101552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.637 [2024-07-24 17:23:14.848931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:31.637 [2024-07-24 17:23:14.848952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:101584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.637 [2024-07-24 17:23:14.848966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:31.637 [2024-07-24 17:23:14.848986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:101616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.637 [2024-07-24 17:23:14.849000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:31.637 [2024-07-24 17:23:14.849021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:102048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.637 [2024-07-24 17:23:14.849034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:31.637 [2024-07-24 17:23:14.849055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:102064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.637 [2024-07-24 17:23:14.849068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:31.637 [2024-07-24 17:23:14.849088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:102080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.637 [2024-07-24 17:23:14.849125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:31.637 [2024-07-24 17:23:14.849149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:102096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.637 [2024-07-24 17:23:14.849163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:31.637 [2024-07-24 17:23:14.849184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:102112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.637 [2024-07-24 17:23:14.849198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:31.637 [2024-07-24 17:23:14.849225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:102128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.637 [2024-07-24 17:23:14.849238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:31.637 [2024-07-24 17:23:14.849258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:102144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.637 [2024-07-24 17:23:14.849271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:31.637 [2024-07-24 17:23:14.849291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:102160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.637 [2024-07-24 17:23:14.849305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:31.637 [2024-07-24 17:23:14.849325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:102176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.637 [2024-07-24 17:23:14.849338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:31.637 [2024-07-24 17:23:14.849372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:102192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.637 [2024-07-24 17:23:14.849388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:31.637 [2024-07-24 17:23:14.849408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:102208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.637 [2024-07-24 17:23:14.849422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:31.637 [2024-07-24 17:23:14.849443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:101632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.637 [2024-07-24 17:23:14.849456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:31.637 [2024-07-24 17:23:14.849477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:101664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.637 [2024-07-24 17:23:14.849491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:31.638 [2024-07-24 17:23:14.849543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:101688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.638 [2024-07-24 17:23:14.849560] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:31.638 [2024-07-24 17:23:14.849580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:102224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.638 [2024-07-24 17:23:14.849594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:31.638 [2024-07-24 17:23:14.849614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:101640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.638 [2024-07-24 17:23:14.849628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:31.638 [2024-07-24 17:23:14.849648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:101672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.638 [2024-07-24 17:23:14.849661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.638 [2024-07-24 17:23:14.851045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:102232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.638 [2024-07-24 17:23:14.851074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.638 [2024-07-24 17:23:14.851101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:101736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.638 [2024-07-24 17:23:14.851130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:31.638 [2024-07-24 17:23:14.851155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:101768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.638 [2024-07-24 17:23:14.851169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:31.638 [2024-07-24 17:23:14.851190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:101800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.638 [2024-07-24 17:23:14.851204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:31.638 [2024-07-24 17:23:14.851238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:102240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.638 [2024-07-24 17:23:14.851254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:31.638 [2024-07-24 17:23:14.851274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:102256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.638 [2024-07-24 17:23:14.851295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:31.638 [2024-07-24 17:23:14.851316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:102272 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:22:31.638 [2024-07-24 17:23:14.851329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:31.638 [2024-07-24 17:23:14.851349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:102288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.638 [2024-07-24 17:23:14.851363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:31.638 [2024-07-24 17:23:14.851384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:102304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.638 [2024-07-24 17:23:14.851398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:31.638 [2024-07-24 17:23:14.851418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:102320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.638 [2024-07-24 17:23:14.851432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:31.638 [2024-07-24 17:23:14.851467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:102336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.638 [2024-07-24 17:23:14.851481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:31.638 [2024-07-24 17:23:14.851501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:102352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.638 [2024-07-24 17:23:14.851514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:31.638 [2024-07-24 17:23:14.851536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:102368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.638 [2024-07-24 17:23:14.851550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:31.638 [2024-07-24 17:23:14.851569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:102384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.638 [2024-07-24 17:23:14.851583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:31.638 [2024-07-24 17:23:14.851602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:102400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.638 [2024-07-24 17:23:14.851616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:31.638 [2024-07-24 17:23:14.851635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:102416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.638 [2024-07-24 17:23:14.851649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:31.638 [2024-07-24 17:23:14.851668] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:102432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.638 [2024-07-24 17:23:14.851690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:31.638 [2024-07-24 17:23:14.851711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:101728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.638 [2024-07-24 17:23:14.851725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:31.638 [2024-07-24 17:23:14.851745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:101760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.638 [2024-07-24 17:23:14.851758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:31.638 [2024-07-24 17:23:14.851778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:101792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.638 [2024-07-24 17:23:14.851791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:31.638 [2024-07-24 17:23:14.851810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:101824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.638 [2024-07-24 17:23:14.851824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:31.638 [2024-07-24 17:23:14.851843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:102448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.638 [2024-07-24 17:23:14.851857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:31.638 [2024-07-24 17:23:14.851877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:101832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.638 [2024-07-24 17:23:14.851890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:31.638 [2024-07-24 17:23:14.851910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:101864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.638 [2024-07-24 17:23:14.851923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:31.638 [2024-07-24 17:23:14.851942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:101888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.638 [2024-07-24 17:23:14.851956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:31.638 [2024-07-24 17:23:14.851975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:101920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.638 [2024-07-24 17:23:14.851989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:31.638 
[2024-07-24 17:23:14.852008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:101952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.638 [2024-07-24 17:23:14.852029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:31.638 [2024-07-24 17:23:14.852050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:101984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.639 [2024-07-24 17:23:14.852063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:31.639 [2024-07-24 17:23:14.852083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:102016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.639 [2024-07-24 17:23:14.852103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:31.639 [2024-07-24 17:23:14.852157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:101840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.639 [2024-07-24 17:23:14.852173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:31.639 [2024-07-24 17:23:14.852193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:101880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.639 [2024-07-24 17:23:14.852207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:31.639 [2024-07-24 17:23:14.852227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:101912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.639 [2024-07-24 17:23:14.852240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:31.639 [2024-07-24 17:23:14.852261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:101944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.639 [2024-07-24 17:23:14.852275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:31.639 [2024-07-24 17:23:14.852294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:101976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.639 [2024-07-24 17:23:14.852308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:31.639 [2024-07-24 17:23:14.852328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:102008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:31.639 [2024-07-24 17:23:14.852342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:31.639 [2024-07-24 17:23:14.852362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:102464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.639 [2024-07-24 17:23:14.852376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:8 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:31.639 [2024-07-24 17:23:14.852396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:102480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.639 [2024-07-24 17:23:14.852409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:31.639 [2024-07-24 17:23:14.852429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:102496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.639 [2024-07-24 17:23:14.852443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:31.639 [2024-07-24 17:23:14.852464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:102512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.639 [2024-07-24 17:23:14.852477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:31.639 [2024-07-24 17:23:14.852498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:102528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.639 [2024-07-24 17:23:14.852523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:31.639 Received shutdown signal, test time was about 32.350372 seconds 00:22:31.639 00:22:31.639 Latency(us) 00:22:31.639 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:31.639 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:31.639 Verification LBA range: start 0x0 length 0x4000 00:22:31.639 Nvme0n1 : 32.35 9252.01 36.14 0.00 0.00 13806.70 532.48 4026531.84 00:22:31.639 =================================================================================================================== 00:22:31.639 Total : 9252.01 36.14 0.00 0.00 13806.70 532.48 4026531.84 00:22:31.639 17:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:31.898 17:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:22:31.898 17:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:31.898 17:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:22:31.898 17:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:31.898 17:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:22:31.898 17:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:31.898 17:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:22:31.898 17:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:31.898 17:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:31.898 rmmod nvme_tcp 00:22:31.898 rmmod nvme_fabrics 00:22:31.898 rmmod nvme_keyring 00:22:31.898 17:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:31.898 17:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:22:31.898 17:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:22:31.898 17:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 88584 ']' 00:22:31.898 17:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 88584 00:22:31.898 17:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 88584 ']' 00:22:31.898 17:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 88584 00:22:31.898 17:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:22:31.898 17:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:31.898 17:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88584 00:22:31.898 killing process with pid 88584 00:22:31.898 17:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:31.898 17:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:31.898 17:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88584' 00:22:31.898 17:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 88584 00:22:31.898 17:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 88584 00:22:32.157 17:23:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:32.157 17:23:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:32.157 17:23:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:32.157 17:23:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:32.157 17:23:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:32.157 17:23:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:32.157 17:23:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:32.157 17:23:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:32.157 17:23:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:32.157 ************************************ 00:22:32.157 END TEST nvmf_host_multipath_status 00:22:32.157 ************************************ 00:22:32.157 00:22:32.157 real 0m38.275s 00:22:32.157 user 2m4.718s 00:22:32.157 sys 0m9.437s 00:22:32.157 17:23:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:32.157 17:23:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:32.157 17:23:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh 
--transport=tcp 00:22:32.157 17:23:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:32.157 17:23:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:32.157 17:23:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:32.157 ************************************ 00:22:32.157 START TEST nvmf_discovery_remove_ifc 00:22:32.157 ************************************ 00:22:32.157 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:22:32.416 * Looking for test storage... 00:22:32.416 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:32.416 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:32.416 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:22:32.416 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:32.416 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:32.416 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:32.416 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:32.416 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:32.416 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:32.416 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:32.416 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:32.416 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:32.416 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:32.416 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:22:32.416 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:22:32.416 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:32.416 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:32.416 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:32.416 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:32.416 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:32.416 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:32.416 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:32.416 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:22:32.416 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.416 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.417 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.417 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:22:32.417 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.417 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:22:32.417 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:32.417 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:32.417 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:32.417 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:32.417 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:22:32.417 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:32.417 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:32.417 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:32.417 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:22:32.417 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:22:32.417 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:22:32.417 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:22:32.417 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:22:32.417 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:22:32.417 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:22:32.417 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:32.417 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:32.417 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:32.417 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:32.417 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:32.417 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:32.417 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:32.417 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:32.417 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:22:32.417 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:22:32.417 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:22:32.417 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:22:32.417 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:22:32.417 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:22:32.417 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:32.417 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:32.417 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:32.417 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:32.417 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 
00:22:32.417 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:32.417 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:32.417 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:32.417 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:32.417 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:32.417 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:32.417 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:32.417 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:32.417 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:32.417 Cannot find device "nvmf_tgt_br" 00:22:32.417 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # true 00:22:32.417 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:32.417 Cannot find device "nvmf_tgt_br2" 00:22:32.417 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # true 00:22:32.417 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:32.417 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:32.417 Cannot find device "nvmf_tgt_br" 00:22:32.417 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # true 00:22:32.417 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:32.417 Cannot find device "nvmf_tgt_br2" 00:22:32.417 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # true 00:22:32.417 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:32.417 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:32.417 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:32.417 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:32.417 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:22:32.417 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:32.417 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:32.417 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:22:32.417 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:32.417 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:32.417 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:32.417 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:32.417 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:32.417 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:32.417 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:32.417 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:32.417 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:32.417 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:32.676 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:32.676 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:32.676 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:32.676 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:32.676 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:32.676 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:32.676 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:32.676 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:32.677 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:32.677 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:32.677 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:32.677 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:32.677 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:32.677 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:32.677 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:32.677 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:22:32.677 00:22:32.677 --- 10.0.0.2 ping statistics --- 00:22:32.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:32.677 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:22:32.677 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:32.677 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:22:32.677 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:22:32.677 00:22:32.677 --- 10.0.0.3 ping statistics --- 00:22:32.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:32.677 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:22:32.677 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:32.677 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:32.677 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:22:32.677 00:22:32.677 --- 10.0.0.1 ping statistics --- 00:22:32.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:32.677 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:22:32.677 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:32.677 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@433 -- # return 0 00:22:32.677 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:32.677 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:32.677 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:32.677 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:32.677 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:32.677 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:32.677 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:32.677 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:22:32.677 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:32.677 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:32.677 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:32.677 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=89984 00:22:32.677 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 89984 00:22:32.677 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:32.677 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 89984 ']' 00:22:32.677 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:32.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:32.677 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:32.677 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
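To recap the nvmf_veth_init trace that precedes the target start above: the test builds a small virtual network in which the NVMe/TCP target runs inside the nvmf_tgt_ns_spdk namespace (addresses 10.0.0.2 and 10.0.0.3) while the initiator side stays on the host (10.0.0.1), with the host-side veth peers tied together by the nvmf_br bridge and reachability confirmed by the pings. Condensed into plain shell, the setup steps visible in the log are roughly the following sketch (cleanup of any leftover interfaces, which produces the "Cannot find device" messages, is omitted):

```bash
# Condensed sketch of the nvmf_veth_init steps seen in the trace above.
ip netns add nvmf_tgt_ns_spdk

# veth pairs: one end stays on the host, the *_br peers get bridged later
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# move the target-side interfaces into the namespace and assign addresses
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# bring everything up and bridge the host-side peers together
ip link set nvmf_init_if up; ip link set nvmf_init_br up
ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# allow NVMe/TCP traffic in and verify reachability, as the log does with ping
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
```

Taking nvmf_tgt_if down again later in the test is what simulates losing the interface the discovery and I/O controllers were reached through.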
00:22:32.677 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:32.677 17:23:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:32.677 [2024-07-24 17:23:18.824726] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:22:32.677 [2024-07-24 17:23:18.824817] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:32.936 [2024-07-24 17:23:18.964820] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:32.936 [2024-07-24 17:23:19.076832] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:32.936 [2024-07-24 17:23:19.077210] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:32.936 [2024-07-24 17:23:19.077231] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:32.936 [2024-07-24 17:23:19.077240] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:32.936 [2024-07-24 17:23:19.077247] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:32.936 [2024-07-24 17:23:19.077278] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:33.886 17:23:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:33.886 17:23:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:22:33.886 17:23:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:33.886 17:23:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:33.886 17:23:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:33.886 17:23:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:33.886 17:23:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:22:33.886 17:23:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.886 17:23:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:33.886 [2024-07-24 17:23:19.913931] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:33.886 [2024-07-24 17:23:19.922046] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:33.886 null0 00:22:33.886 [2024-07-24 17:23:19.953984] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:33.886 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 
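The rpc_cmd block at host/discovery_remove_ifc.sh@43 runs with xtrace disabled, so the individual target-side RPCs are not printed; the notices above (TCP transport init, a discovery listener on 10.0.0.2:8009, a null0 bdev, a data listener on 10.0.0.2:4420) imply a configuration along the lines of the hypothetical sketch below. The RPC names are real SPDK RPCs, but the bdev size and exact flags are illustrative only and may differ from the script's literal contents:

```bash
# Hypothetical reconstruction of the target-side setup hidden behind rpc_cmd at @43;
# sizes and option values are illustrative, not the script's literal arguments.
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"   # target app listens on the default /var/tmp/spdk.sock

$rpc nvmf_create_transport -t tcp                                    # "*** TCP Transport Init ***"
$rpc nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
     -t tcp -a 10.0.0.2 -s 8009                                      # discovery service the host will query
$rpc bdev_null_create null0 1000 512                                 # the "null0" namespace backing bdev
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
```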
00:22:33.886 17:23:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.886 17:23:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=90034 00:22:33.886 17:23:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:22:33.886 17:23:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 90034 /tmp/host.sock 00:22:33.886 17:23:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 90034 ']' 00:22:33.886 17:23:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:22:33.886 17:23:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:33.886 17:23:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:33.886 17:23:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:33.886 17:23:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:33.886 [2024-07-24 17:23:20.038616] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:22:33.886 [2024-07-24 17:23:20.039277] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90034 ] 00:22:34.145 [2024-07-24 17:23:20.175549] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:34.145 [2024-07-24 17:23:20.283424] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:35.080 17:23:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:35.080 17:23:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:22:35.080 17:23:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:35.080 17:23:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:22:35.080 17:23:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.080 17:23:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:35.080 17:23:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.080 17:23:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:22:35.080 17:23:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.080 17:23:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:35.080 17:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.080 17:23:21 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:22:35.080 17:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.080 17:23:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:36.017 [2024-07-24 17:23:22.087037] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:36.017 [2024-07-24 17:23:22.087069] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:36.017 [2024-07-24 17:23:22.087087] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:36.017 [2024-07-24 17:23:22.175195] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:22:36.017 [2024-07-24 17:23:22.238912] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:22:36.017 [2024-07-24 17:23:22.238968] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:22:36.017 [2024-07-24 17:23:22.239022] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:22:36.017 [2024-07-24 17:23:22.239041] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:36.017 [2024-07-24 17:23:22.239065] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:36.017 17:23:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.017 17:23:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:22:36.017 17:23:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:36.017 17:23:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:36.017 17:23:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.017 17:23:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:36.017 17:23:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:36.017 17:23:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:36.017 17:23:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:36.017 [2024-07-24 17:23:22.245646] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x20c6650 was disconnected and freed. delete nvme_qpair. 
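The host-side flow in this stretch of the trace boils down to two steps: attach bdev_nvme to the target's discovery service, then wait until the discovered namespace shows up as a bdev. A rough shell equivalent is sketched below; the command-line options are taken verbatim from the trace, while get_bdev_list mirrors the jq/sort/xargs pipeline and the polling loop only approximates the script's wait_for_bdev helper:

```bash
# Rough shell equivalent of the discovery attach + wait seen above; the loop shape is an
# approximation of the script's wait_for_bdev helper, not a verbatim copy.
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"

# Attach via the discovery service on 10.0.0.2:8009; --wait-for-attach blocks until the
# discovered subsystem (nvme0 -> nvme0n1) is connected on the host app at /tmp/host.sock.
$rpc -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
     -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
     --fast-io-fail-timeout-sec 1 --wait-for-attach

get_bdev_list() {
    $rpc -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

# wait_for_bdev nvme0n1: poll once a second until the expected bdev list appears
while [[ "$(get_bdev_list)" != "nvme0n1" ]]; do
    sleep 1
done
```

The same polling is reused just below with an empty expected list, after the target-side address is removed and nvmf_tgt_if is downed, to confirm that the bdev disappears once the interface is gone.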
00:22:36.275 17:23:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.275 17:23:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:22:36.275 17:23:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:22:36.275 17:23:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:22:36.275 17:23:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:22:36.275 17:23:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:36.275 17:23:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:36.275 17:23:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.275 17:23:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:36.275 17:23:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:36.275 17:23:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:36.275 17:23:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:36.275 17:23:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.275 17:23:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:36.275 17:23:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:37.210 17:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:37.210 17:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:37.210 17:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:37.210 17:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.210 17:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:37.210 17:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:37.210 17:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:37.210 17:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.210 17:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:37.210 17:23:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:38.585 17:23:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:38.585 17:23:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:38.585 17:23:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.586 17:23:24 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:38.586 17:23:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:38.586 17:23:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:38.586 17:23:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:38.586 17:23:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.586 17:23:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:38.586 17:23:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:39.520 17:23:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:39.520 17:23:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:39.520 17:23:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:39.520 17:23:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.520 17:23:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:39.520 17:23:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:39.520 17:23:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:39.520 17:23:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.520 17:23:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:39.520 17:23:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:40.455 17:23:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:40.455 17:23:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:40.455 17:23:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:40.455 17:23:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:40.455 17:23:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.455 17:23:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:40.455 17:23:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:40.455 17:23:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.455 17:23:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:40.455 17:23:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:41.391 17:23:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:41.391 17:23:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:41.391 17:23:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:41.391 17:23:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.391 17:23:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:41.391 17:23:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:41.391 17:23:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:41.671 17:23:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.671 [2024-07-24 17:23:27.667133] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:22:41.671 [2024-07-24 17:23:27.667238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:41.671 [2024-07-24 17:23:27.667256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.671 [2024-07-24 17:23:27.667268] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:41.671 [2024-07-24 17:23:27.667278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.671 [2024-07-24 17:23:27.667288] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:41.671 [2024-07-24 17:23:27.667297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.671 [2024-07-24 17:23:27.667307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:41.671 [2024-07-24 17:23:27.667315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.671 [2024-07-24 17:23:27.667325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:41.671 [2024-07-24 17:23:27.667334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:41.671 [2024-07-24 17:23:27.667343] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208f900 is same with the state(5) to be set 00:22:41.671 17:23:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:41.671 17:23:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:41.671 [2024-07-24 17:23:27.677143] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208f900 (9): Bad file descriptor 00:22:41.671 [2024-07-24 17:23:27.687166] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:42.613 17:23:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:42.613 17:23:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:42.613 17:23:28 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:42.613 17:23:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.613 17:23:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:42.613 17:23:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:42.613 17:23:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:42.613 [2024-07-24 17:23:28.720165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:22:42.613 [2024-07-24 17:23:28.720245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x208f900 with addr=10.0.0.2, port=4420 00:22:42.613 [2024-07-24 17:23:28.720267] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x208f900 is same with the state(5) to be set 00:22:42.613 [2024-07-24 17:23:28.720311] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x208f900 (9): Bad file descriptor 00:22:42.613 [2024-07-24 17:23:28.720776] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:42.613 [2024-07-24 17:23:28.720813] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:42.613 [2024-07-24 17:23:28.720825] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:42.613 [2024-07-24 17:23:28.720835] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:42.613 [2024-07-24 17:23:28.720859] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:42.613 [2024-07-24 17:23:28.720871] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:42.613 17:23:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.613 17:23:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:42.613 17:23:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:43.549 [2024-07-24 17:23:29.720914] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:43.549 [2024-07-24 17:23:29.720985] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:43.549 [2024-07-24 17:23:29.721012] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:43.549 [2024-07-24 17:23:29.721022] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:22:43.549 [2024-07-24 17:23:29.721046] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
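The errno 110 connect failures above are expected: the test deliberately pulls the target-side interface inside the nvmf_tgt_ns_spdk namespace and later restores it. A condensed sketch of that interface flap, reconstructed only from the ip commands visible in this trace (not the verbatim discovery_remove_ifc.sh source):

    # Interface flap driving the reconnect failures seen above.
    NS=nvmf_tgt_ns_spdk
    IF=nvmf_tgt_if

    # Remove the target's address and take the link down so the host's TCP
    # qpair to 10.0.0.2:4420 starts timing out (errno 110).
    ip netns exec "$NS" ip addr del 10.0.0.2/24 dev "$IF"
    ip netns exec "$NS" ip link set "$IF" down

    # ... the host keeps polling until the nvme0n1 bdev is gone ...

    # Restore the interface; discovery then re-attaches the subsystem as nvme1.
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$IF"
    ip netns exec "$NS" ip link set "$IF" up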
00:22:43.549 [2024-07-24 17:23:29.721076] bdev_nvme.c:6762:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:22:43.549 [2024-07-24 17:23:29.721158] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:43.549 [2024-07-24 17:23:29.721174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.549 [2024-07-24 17:23:29.721188] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:43.549 [2024-07-24 17:23:29.721196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.549 [2024-07-24 17:23:29.721206] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:43.549 [2024-07-24 17:23:29.721214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.549 [2024-07-24 17:23:29.721224] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:43.549 [2024-07-24 17:23:29.721232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.549 [2024-07-24 17:23:29.721242] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:43.549 [2024-07-24 17:23:29.721250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.549 [2024-07-24 17:23:29.721259] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
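The repeated get_bdev_list/sleep 1 entries in this trace form a one-second polling loop over the host app's RPC socket. A minimal sketch of that pattern, assuming rpc_cmd wraps scripts/rpc.py with the given -s socket as in the SPDK test harness; the function bodies here are a simplified reconstruction, not the script's exact source:

    # Flatten the bdev list reported by the host app on /tmp/host.sock,
    # the same way the trace does: rpc | jq | sort | xargs.
    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    # Poll once per second until the list matches the expectation:
    # '' while waiting for nvme0n1 to disappear after the interface goes down,
    # nvme1n1 once the interface returns and discovery re-attaches the subsystem.
    wait_for_bdev() {
        local expected=$1
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }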
00:22:43.549 [2024-07-24 17:23:29.721299] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20323e0 (9): Bad file descriptor 00:22:43.549 [2024-07-24 17:23:29.722290] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:22:43.549 [2024-07-24 17:23:29.722315] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:22:43.549 17:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:43.549 17:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:43.549 17:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.549 17:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:43.549 17:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:43.549 17:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:43.549 17:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:43.549 17:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.807 17:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:22:43.807 17:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:43.807 17:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:43.807 17:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:22:43.807 17:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:43.807 17:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:43.807 17:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:43.807 17:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:43.807 17:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.808 17:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:43.808 17:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:43.808 17:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.808 17:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:22:43.808 17:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:44.742 17:23:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:44.742 17:23:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:44.742 17:23:30 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:44.742 17:23:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:44.742 17:23:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.742 17:23:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:44.742 17:23:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:44.742 17:23:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.742 17:23:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:22:44.742 17:23:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:45.676 [2024-07-24 17:23:31.728860] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:45.676 [2024-07-24 17:23:31.728891] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:45.676 [2024-07-24 17:23:31.728910] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:45.676 [2024-07-24 17:23:31.814971] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:22:45.677 [2024-07-24 17:23:31.871335] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:22:45.677 [2024-07-24 17:23:31.871389] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:22:45.677 [2024-07-24 17:23:31.871449] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:22:45.677 [2024-07-24 17:23:31.871495] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:22:45.677 [2024-07-24 17:23:31.871503] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:45.677 [2024-07-24 17:23:31.877376] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x20ab390 was disconnected and freed. delete nvme_qpair. 
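With the subsystem re-attached as nvme1, the rest of the trace below is the standard check-and-teardown path. Condensed into shell form using the pid values from this particular run; the helper names (killprocess, _remove_spdk_ns) are the ones the trace itself calls, with their bodies left out:

    # Teardown sequence as it appears in the trace below (pids are run-specific).
    trap - SIGINT SIGTERM EXIT        # drop the error trap once nvme1n1 is confirmed
    killprocess 90034                 # host app (reactor_0) behind /tmp/host.sock
    sync
    modprobe -v -r nvme-tcp           # rmmod nvme_tcp / nvme_fabrics / nvme_keyring follow
    modprobe -v -r nvme-fabrics
    killprocess 89984                 # nvmf target (reactor_1)
    _remove_spdk_ns                   # namespace cleanup; xtrace-silenced in the trace
    ip -4 addr flush nvmf_init_if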
00:22:45.935 17:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:45.935 17:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:45.935 17:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:45.935 17:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:45.935 17:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.935 17:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:45.935 17:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:45.935 17:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.935 17:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:22:45.935 17:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:22:45.935 17:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 90034 00:22:45.935 17:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 90034 ']' 00:22:45.935 17:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 90034 00:22:45.935 17:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:22:45.935 17:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:45.935 17:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90034 00:22:45.935 killing process with pid 90034 00:22:45.935 17:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:45.935 17:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:45.935 17:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90034' 00:22:45.935 17:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 90034 00:22:45.935 17:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 90034 00:22:46.194 17:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:22:46.194 17:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:46.194 17:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:22:46.194 17:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:46.194 17:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:22:46.194 17:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:46.194 17:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:46.194 rmmod nvme_tcp 00:22:46.194 rmmod nvme_fabrics 00:22:46.194 rmmod nvme_keyring 00:22:46.194 17:23:32 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:46.194 17:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:22:46.194 17:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:22:46.194 17:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 89984 ']' 00:22:46.194 17:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 89984 00:22:46.194 17:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 89984 ']' 00:22:46.194 17:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 89984 00:22:46.194 17:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:22:46.194 17:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:46.194 17:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89984 00:22:46.194 17:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:46.194 17:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:46.194 17:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89984' 00:22:46.194 killing process with pid 89984 00:22:46.194 17:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 89984 00:22:46.194 17:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 89984 00:22:46.452 17:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:46.452 17:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:46.452 17:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:46.452 17:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:46.452 17:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:46.452 17:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:46.452 17:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:46.452 17:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:46.452 17:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:46.452 00:22:46.452 real 0m14.309s 00:22:46.452 user 0m25.655s 00:22:46.452 sys 0m1.642s 00:22:46.452 17:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:46.452 17:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:46.452 ************************************ 00:22:46.452 END TEST nvmf_discovery_remove_ifc 00:22:46.452 ************************************ 00:22:46.452 17:23:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:22:46.452 17:23:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:46.452 17:23:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:46.452 17:23:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.452 ************************************ 00:22:46.452 START TEST nvmf_identify_kernel_target 00:22:46.452 ************************************ 00:22:46.452 17:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:22:46.710 * Looking for test storage... 00:22:46.710 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:46.710 17:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:46.710 17:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:22:46.710 17:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:46.710 17:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:46.710 17:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:46.710 17:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:46.710 17:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:46.710 17:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:46.710 17:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:46.710 17:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:46.710 17:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:46.710 17:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:46.710 17:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:22:46.710 17:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:22:46.710 17:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:46.710 17:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:46.710 17:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:46.710 17:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:46.710 17:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:46.710 17:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:46.710 17:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:46.710 
17:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:46.710 17:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.710 17:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.710 17:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.710 17:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:22:46.710 17:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.710 17:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:22:46.710 17:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:46.710 17:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:46.710 17:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:46.710 17:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" 
-e 0xFFFF) 00:22:46.710 17:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:46.710 17:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:46.710 17:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:46.710 17:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:46.711 17:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:22:46.711 17:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:46.711 17:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:46.711 17:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:46.711 17:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:46.711 17:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:46.711 17:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:46.711 17:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:46.711 17:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:46.711 17:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:22:46.711 17:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:22:46.711 17:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:22:46.711 17:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:22:46.711 17:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:22:46.711 17:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:22:46.711 17:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:46.711 17:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:46.711 17:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:46.711 17:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:46.711 17:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:46.711 17:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:46.711 17:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:46.711 17:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:46.711 17:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:46.711 17:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:46.711 17:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:46.711 17:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:46.711 17:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:46.711 17:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:46.711 Cannot find device "nvmf_tgt_br" 00:22:46.711 17:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # true 00:22:46.711 17:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:46.711 Cannot find device "nvmf_tgt_br2" 00:22:46.711 17:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # true 00:22:46.711 17:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:46.711 17:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:46.711 Cannot find device "nvmf_tgt_br" 00:22:46.711 17:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # true 00:22:46.711 17:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:46.711 Cannot find device "nvmf_tgt_br2" 00:22:46.711 17:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # true 00:22:46.711 17:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:46.711 17:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:46.711 17:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:46.711 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:46.711 17:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:22:46.711 17:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:46.711 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:46.711 17:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:22:46.711 17:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:46.711 17:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:46.711 17:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:46.970 17:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:46.970 17:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:46.970 17:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:46.970 17:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:46.970 17:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:46.970 17:23:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:46.970 17:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:46.970 17:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:46.970 17:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:46.970 17:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:46.970 17:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:46.970 17:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:46.970 17:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:46.970 17:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:46.970 17:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:46.970 17:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:46.970 17:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:46.970 17:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:46.970 17:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:46.970 17:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:46.970 17:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:46.970 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:46.970 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:22:46.970 00:22:46.970 --- 10.0.0.2 ping statistics --- 00:22:46.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:46.970 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:22:46.970 17:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:46.970 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:46.970 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.034 ms 00:22:46.970 00:22:46.970 --- 10.0.0.3 ping statistics --- 00:22:46.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:46.970 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:22:46.970 17:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:46.970 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:46.970 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:22:46.970 00:22:46.970 --- 10.0.0.1 ping statistics --- 00:22:46.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:46.970 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:22:46.970 17:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:46.970 17:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@433 -- # return 0 00:22:46.970 17:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:46.970 17:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:46.970 17:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:46.970 17:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:46.970 17:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:46.970 17:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:46.970 17:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:46.970 17:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:22:46.970 17:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:22:46.970 17:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:22:46.970 17:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:46.970 17:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:46.970 17:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:46.970 17:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:46.970 17:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:46.970 17:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:46.970 17:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:46.970 17:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:46.970 17:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:46.970 17:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:22:46.970 17:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:22:46.970 17:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:22:46.970 17:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:22:46.970 17:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:46.970 17:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:46.970 17:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:22:46.970 17:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:22:46.970 17:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:22:46.970 17:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:22:46.970 17:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:22:46.970 17:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:22:47.536 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:47.536 Waiting for block devices as requested 00:22:47.536 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:22:47.536 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:22:47.536 17:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:22:47.536 17:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:22:47.536 17:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:22:47.536 17:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:22:47.536 17:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:22:47.536 17:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:22:47.536 17:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:22:47.536 17:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:22:47.536 17:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:22:47.794 No valid GPT data, bailing 00:22:47.794 17:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:22:47.794 17:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:22:47.794 17:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:22:47.794 17:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:22:47.794 17:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:22:47.794 17:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:22:47.794 17:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:22:47.794 17:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:22:47.794 17:23:33 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:22:47.794 17:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:22:47.794 17:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:22:47.794 17:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:22:47.794 17:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:22:47.794 No valid GPT data, bailing 00:22:47.794 17:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:22:47.794 17:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:22:47.794 17:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:22:47.794 17:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:22:47.794 17:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:22:47.794 17:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:22:47.794 17:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:22:47.794 17:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:22:47.794 17:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:22:47.794 17:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:22:47.795 17:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:22:47.795 17:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:22:47.795 17:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:22:47.795 No valid GPT data, bailing 00:22:47.795 17:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:22:47.795 17:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:22:47.795 17:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:22:47.795 17:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:22:47.795 17:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:22:47.795 17:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:22:47.795 17:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:22:47.795 17:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:22:47.795 17:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:22:47.795 17:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1665 -- # [[ none != none ]] 00:22:47.795 17:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:22:47.795 17:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:22:47.795 17:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:22:47.795 No valid GPT data, bailing 00:22:47.795 17:23:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:22:47.795 17:23:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:22:47.795 17:23:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:22:47.795 17:23:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:22:47.795 17:23:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:22:47.795 17:23:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:47.795 17:23:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:47.795 17:23:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:22:47.795 17:23:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:22:47.795 17:23:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:22:47.795 17:23:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:22:47.795 17:23:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:22:47.795 17:23:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:22:47.795 17:23:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:22:47.795 17:23:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:22:47.795 17:23:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:22:47.795 17:23:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:22:48.053 17:23:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid=6e739a91-f4c2-49d2-9c71-a65eb7d5315b -a 10.0.0.1 -t tcp -s 4420 00:22:48.053 00:22:48.053 Discovery Log Number of Records 2, Generation counter 2 00:22:48.053 =====Discovery Log Entry 0====== 00:22:48.053 trtype: tcp 00:22:48.053 adrfam: ipv4 00:22:48.053 subtype: current discovery subsystem 00:22:48.053 treq: not specified, sq flow control disable supported 00:22:48.053 portid: 1 00:22:48.053 trsvcid: 4420 00:22:48.053 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:22:48.053 traddr: 10.0.0.1 00:22:48.053 eflags: none 00:22:48.053 sectype: none 00:22:48.053 =====Discovery Log Entry 1====== 00:22:48.053 trtype: tcp 00:22:48.053 adrfam: ipv4 00:22:48.053 subtype: nvme subsystem 00:22:48.053 treq: not 
specified, sq flow control disable supported 00:22:48.053 portid: 1 00:22:48.053 trsvcid: 4420 00:22:48.053 subnqn: nqn.2016-06.io.spdk:testnqn 00:22:48.053 traddr: 10.0.0.1 00:22:48.053 eflags: none 00:22:48.053 sectype: none 00:22:48.053 17:23:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:22:48.053 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:22:48.053 ===================================================== 00:22:48.053 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:22:48.053 ===================================================== 00:22:48.053 Controller Capabilities/Features 00:22:48.053 ================================ 00:22:48.053 Vendor ID: 0000 00:22:48.053 Subsystem Vendor ID: 0000 00:22:48.053 Serial Number: 38813233dfda41004c68 00:22:48.053 Model Number: Linux 00:22:48.053 Firmware Version: 6.7.0-68 00:22:48.053 Recommended Arb Burst: 0 00:22:48.053 IEEE OUI Identifier: 00 00 00 00:22:48.053 Multi-path I/O 00:22:48.053 May have multiple subsystem ports: No 00:22:48.053 May have multiple controllers: No 00:22:48.053 Associated with SR-IOV VF: No 00:22:48.053 Max Data Transfer Size: Unlimited 00:22:48.053 Max Number of Namespaces: 0 00:22:48.053 Max Number of I/O Queues: 1024 00:22:48.053 NVMe Specification Version (VS): 1.3 00:22:48.054 NVMe Specification Version (Identify): 1.3 00:22:48.054 Maximum Queue Entries: 1024 00:22:48.054 Contiguous Queues Required: No 00:22:48.054 Arbitration Mechanisms Supported 00:22:48.054 Weighted Round Robin: Not Supported 00:22:48.054 Vendor Specific: Not Supported 00:22:48.054 Reset Timeout: 7500 ms 00:22:48.054 Doorbell Stride: 4 bytes 00:22:48.054 NVM Subsystem Reset: Not Supported 00:22:48.054 Command Sets Supported 00:22:48.054 NVM Command Set: Supported 00:22:48.054 Boot Partition: Not Supported 00:22:48.054 Memory Page Size Minimum: 4096 bytes 00:22:48.054 Memory Page Size Maximum: 4096 bytes 00:22:48.054 Persistent Memory Region: Not Supported 00:22:48.054 Optional Asynchronous Events Supported 00:22:48.054 Namespace Attribute Notices: Not Supported 00:22:48.054 Firmware Activation Notices: Not Supported 00:22:48.054 ANA Change Notices: Not Supported 00:22:48.054 PLE Aggregate Log Change Notices: Not Supported 00:22:48.054 LBA Status Info Alert Notices: Not Supported 00:22:48.054 EGE Aggregate Log Change Notices: Not Supported 00:22:48.054 Normal NVM Subsystem Shutdown event: Not Supported 00:22:48.054 Zone Descriptor Change Notices: Not Supported 00:22:48.054 Discovery Log Change Notices: Supported 00:22:48.054 Controller Attributes 00:22:48.054 128-bit Host Identifier: Not Supported 00:22:48.054 Non-Operational Permissive Mode: Not Supported 00:22:48.054 NVM Sets: Not Supported 00:22:48.054 Read Recovery Levels: Not Supported 00:22:48.054 Endurance Groups: Not Supported 00:22:48.054 Predictable Latency Mode: Not Supported 00:22:48.054 Traffic Based Keep ALive: Not Supported 00:22:48.054 Namespace Granularity: Not Supported 00:22:48.054 SQ Associations: Not Supported 00:22:48.054 UUID List: Not Supported 00:22:48.054 Multi-Domain Subsystem: Not Supported 00:22:48.054 Fixed Capacity Management: Not Supported 00:22:48.054 Variable Capacity Management: Not Supported 00:22:48.054 Delete Endurance Group: Not Supported 00:22:48.054 Delete NVM Set: Not Supported 00:22:48.054 Extended LBA Formats Supported: Not Supported 00:22:48.054 Flexible Data 
Placement Supported: Not Supported 00:22:48.054 00:22:48.054 Controller Memory Buffer Support 00:22:48.054 ================================ 00:22:48.054 Supported: No 00:22:48.054 00:22:48.054 Persistent Memory Region Support 00:22:48.054 ================================ 00:22:48.054 Supported: No 00:22:48.054 00:22:48.054 Admin Command Set Attributes 00:22:48.054 ============================ 00:22:48.054 Security Send/Receive: Not Supported 00:22:48.054 Format NVM: Not Supported 00:22:48.054 Firmware Activate/Download: Not Supported 00:22:48.054 Namespace Management: Not Supported 00:22:48.054 Device Self-Test: Not Supported 00:22:48.054 Directives: Not Supported 00:22:48.054 NVMe-MI: Not Supported 00:22:48.054 Virtualization Management: Not Supported 00:22:48.054 Doorbell Buffer Config: Not Supported 00:22:48.054 Get LBA Status Capability: Not Supported 00:22:48.054 Command & Feature Lockdown Capability: Not Supported 00:22:48.054 Abort Command Limit: 1 00:22:48.054 Async Event Request Limit: 1 00:22:48.054 Number of Firmware Slots: N/A 00:22:48.054 Firmware Slot 1 Read-Only: N/A 00:22:48.054 Firmware Activation Without Reset: N/A 00:22:48.054 Multiple Update Detection Support: N/A 00:22:48.054 Firmware Update Granularity: No Information Provided 00:22:48.054 Per-Namespace SMART Log: No 00:22:48.054 Asymmetric Namespace Access Log Page: Not Supported 00:22:48.054 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:22:48.054 Command Effects Log Page: Not Supported 00:22:48.054 Get Log Page Extended Data: Supported 00:22:48.054 Telemetry Log Pages: Not Supported 00:22:48.054 Persistent Event Log Pages: Not Supported 00:22:48.054 Supported Log Pages Log Page: May Support 00:22:48.054 Commands Supported & Effects Log Page: Not Supported 00:22:48.054 Feature Identifiers & Effects Log Page:May Support 00:22:48.054 NVMe-MI Commands & Effects Log Page: May Support 00:22:48.054 Data Area 4 for Telemetry Log: Not Supported 00:22:48.054 Error Log Page Entries Supported: 1 00:22:48.054 Keep Alive: Not Supported 00:22:48.054 00:22:48.054 NVM Command Set Attributes 00:22:48.054 ========================== 00:22:48.054 Submission Queue Entry Size 00:22:48.054 Max: 1 00:22:48.054 Min: 1 00:22:48.054 Completion Queue Entry Size 00:22:48.054 Max: 1 00:22:48.054 Min: 1 00:22:48.054 Number of Namespaces: 0 00:22:48.054 Compare Command: Not Supported 00:22:48.054 Write Uncorrectable Command: Not Supported 00:22:48.054 Dataset Management Command: Not Supported 00:22:48.054 Write Zeroes Command: Not Supported 00:22:48.054 Set Features Save Field: Not Supported 00:22:48.054 Reservations: Not Supported 00:22:48.054 Timestamp: Not Supported 00:22:48.054 Copy: Not Supported 00:22:48.054 Volatile Write Cache: Not Present 00:22:48.054 Atomic Write Unit (Normal): 1 00:22:48.054 Atomic Write Unit (PFail): 1 00:22:48.054 Atomic Compare & Write Unit: 1 00:22:48.054 Fused Compare & Write: Not Supported 00:22:48.054 Scatter-Gather List 00:22:48.054 SGL Command Set: Supported 00:22:48.054 SGL Keyed: Not Supported 00:22:48.054 SGL Bit Bucket Descriptor: Not Supported 00:22:48.054 SGL Metadata Pointer: Not Supported 00:22:48.054 Oversized SGL: Not Supported 00:22:48.054 SGL Metadata Address: Not Supported 00:22:48.054 SGL Offset: Supported 00:22:48.054 Transport SGL Data Block: Not Supported 00:22:48.054 Replay Protected Memory Block: Not Supported 00:22:48.054 00:22:48.054 Firmware Slot Information 00:22:48.054 ========================= 00:22:48.054 Active slot: 0 00:22:48.054 00:22:48.054 00:22:48.054 Error Log 
00:22:48.054 ========= 00:22:48.054 00:22:48.054 Active Namespaces 00:22:48.054 ================= 00:22:48.054 Discovery Log Page 00:22:48.054 ================== 00:22:48.054 Generation Counter: 2 00:22:48.054 Number of Records: 2 00:22:48.054 Record Format: 0 00:22:48.054 00:22:48.054 Discovery Log Entry 0 00:22:48.054 ---------------------- 00:22:48.054 Transport Type: 3 (TCP) 00:22:48.054 Address Family: 1 (IPv4) 00:22:48.054 Subsystem Type: 3 (Current Discovery Subsystem) 00:22:48.054 Entry Flags: 00:22:48.054 Duplicate Returned Information: 0 00:22:48.054 Explicit Persistent Connection Support for Discovery: 0 00:22:48.054 Transport Requirements: 00:22:48.054 Secure Channel: Not Specified 00:22:48.054 Port ID: 1 (0x0001) 00:22:48.054 Controller ID: 65535 (0xffff) 00:22:48.054 Admin Max SQ Size: 32 00:22:48.054 Transport Service Identifier: 4420 00:22:48.054 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:22:48.054 Transport Address: 10.0.0.1 00:22:48.054 Discovery Log Entry 1 00:22:48.054 ---------------------- 00:22:48.054 Transport Type: 3 (TCP) 00:22:48.054 Address Family: 1 (IPv4) 00:22:48.054 Subsystem Type: 2 (NVM Subsystem) 00:22:48.054 Entry Flags: 00:22:48.054 Duplicate Returned Information: 0 00:22:48.054 Explicit Persistent Connection Support for Discovery: 0 00:22:48.054 Transport Requirements: 00:22:48.054 Secure Channel: Not Specified 00:22:48.054 Port ID: 1 (0x0001) 00:22:48.054 Controller ID: 65535 (0xffff) 00:22:48.054 Admin Max SQ Size: 32 00:22:48.054 Transport Service Identifier: 4420 00:22:48.054 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:22:48.054 Transport Address: 10.0.0.1 00:22:48.054 17:23:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:48.312 get_feature(0x01) failed 00:22:48.312 get_feature(0x02) failed 00:22:48.312 get_feature(0x04) failed 00:22:48.312 ===================================================== 00:22:48.312 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:22:48.312 ===================================================== 00:22:48.312 Controller Capabilities/Features 00:22:48.312 ================================ 00:22:48.312 Vendor ID: 0000 00:22:48.312 Subsystem Vendor ID: 0000 00:22:48.312 Serial Number: 3b9b131bcb151f8cd37b 00:22:48.312 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:22:48.312 Firmware Version: 6.7.0-68 00:22:48.312 Recommended Arb Burst: 6 00:22:48.312 IEEE OUI Identifier: 00 00 00 00:22:48.312 Multi-path I/O 00:22:48.312 May have multiple subsystem ports: Yes 00:22:48.312 May have multiple controllers: Yes 00:22:48.312 Associated with SR-IOV VF: No 00:22:48.312 Max Data Transfer Size: Unlimited 00:22:48.312 Max Number of Namespaces: 1024 00:22:48.312 Max Number of I/O Queues: 128 00:22:48.312 NVMe Specification Version (VS): 1.3 00:22:48.312 NVMe Specification Version (Identify): 1.3 00:22:48.312 Maximum Queue Entries: 1024 00:22:48.312 Contiguous Queues Required: No 00:22:48.312 Arbitration Mechanisms Supported 00:22:48.312 Weighted Round Robin: Not Supported 00:22:48.312 Vendor Specific: Not Supported 00:22:48.312 Reset Timeout: 7500 ms 00:22:48.312 Doorbell Stride: 4 bytes 00:22:48.312 NVM Subsystem Reset: Not Supported 00:22:48.312 Command Sets Supported 00:22:48.312 NVM Command Set: Supported 00:22:48.312 Boot Partition: Not Supported 00:22:48.312 Memory 
Page Size Minimum: 4096 bytes 00:22:48.312 Memory Page Size Maximum: 4096 bytes 00:22:48.312 Persistent Memory Region: Not Supported 00:22:48.312 Optional Asynchronous Events Supported 00:22:48.312 Namespace Attribute Notices: Supported 00:22:48.312 Firmware Activation Notices: Not Supported 00:22:48.312 ANA Change Notices: Supported 00:22:48.312 PLE Aggregate Log Change Notices: Not Supported 00:22:48.312 LBA Status Info Alert Notices: Not Supported 00:22:48.312 EGE Aggregate Log Change Notices: Not Supported 00:22:48.312 Normal NVM Subsystem Shutdown event: Not Supported 00:22:48.312 Zone Descriptor Change Notices: Not Supported 00:22:48.312 Discovery Log Change Notices: Not Supported 00:22:48.312 Controller Attributes 00:22:48.312 128-bit Host Identifier: Supported 00:22:48.313 Non-Operational Permissive Mode: Not Supported 00:22:48.313 NVM Sets: Not Supported 00:22:48.313 Read Recovery Levels: Not Supported 00:22:48.313 Endurance Groups: Not Supported 00:22:48.313 Predictable Latency Mode: Not Supported 00:22:48.313 Traffic Based Keep ALive: Supported 00:22:48.313 Namespace Granularity: Not Supported 00:22:48.313 SQ Associations: Not Supported 00:22:48.313 UUID List: Not Supported 00:22:48.313 Multi-Domain Subsystem: Not Supported 00:22:48.313 Fixed Capacity Management: Not Supported 00:22:48.313 Variable Capacity Management: Not Supported 00:22:48.313 Delete Endurance Group: Not Supported 00:22:48.313 Delete NVM Set: Not Supported 00:22:48.313 Extended LBA Formats Supported: Not Supported 00:22:48.313 Flexible Data Placement Supported: Not Supported 00:22:48.313 00:22:48.313 Controller Memory Buffer Support 00:22:48.313 ================================ 00:22:48.313 Supported: No 00:22:48.313 00:22:48.313 Persistent Memory Region Support 00:22:48.313 ================================ 00:22:48.313 Supported: No 00:22:48.313 00:22:48.313 Admin Command Set Attributes 00:22:48.313 ============================ 00:22:48.313 Security Send/Receive: Not Supported 00:22:48.313 Format NVM: Not Supported 00:22:48.313 Firmware Activate/Download: Not Supported 00:22:48.313 Namespace Management: Not Supported 00:22:48.313 Device Self-Test: Not Supported 00:22:48.313 Directives: Not Supported 00:22:48.313 NVMe-MI: Not Supported 00:22:48.313 Virtualization Management: Not Supported 00:22:48.313 Doorbell Buffer Config: Not Supported 00:22:48.313 Get LBA Status Capability: Not Supported 00:22:48.313 Command & Feature Lockdown Capability: Not Supported 00:22:48.313 Abort Command Limit: 4 00:22:48.313 Async Event Request Limit: 4 00:22:48.313 Number of Firmware Slots: N/A 00:22:48.313 Firmware Slot 1 Read-Only: N/A 00:22:48.313 Firmware Activation Without Reset: N/A 00:22:48.313 Multiple Update Detection Support: N/A 00:22:48.313 Firmware Update Granularity: No Information Provided 00:22:48.313 Per-Namespace SMART Log: Yes 00:22:48.313 Asymmetric Namespace Access Log Page: Supported 00:22:48.313 ANA Transition Time : 10 sec 00:22:48.313 00:22:48.313 Asymmetric Namespace Access Capabilities 00:22:48.313 ANA Optimized State : Supported 00:22:48.313 ANA Non-Optimized State : Supported 00:22:48.313 ANA Inaccessible State : Supported 00:22:48.313 ANA Persistent Loss State : Supported 00:22:48.313 ANA Change State : Supported 00:22:48.313 ANAGRPID is not changed : No 00:22:48.313 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:22:48.313 00:22:48.313 ANA Group Identifier Maximum : 128 00:22:48.313 Number of ANA Group Identifiers : 128 00:22:48.313 Max Number of Allowed Namespaces : 1024 00:22:48.313 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:22:48.313 Command Effects Log Page: Supported 00:22:48.313 Get Log Page Extended Data: Supported 00:22:48.313 Telemetry Log Pages: Not Supported 00:22:48.313 Persistent Event Log Pages: Not Supported 00:22:48.313 Supported Log Pages Log Page: May Support 00:22:48.313 Commands Supported & Effects Log Page: Not Supported 00:22:48.313 Feature Identifiers & Effects Log Page:May Support 00:22:48.313 NVMe-MI Commands & Effects Log Page: May Support 00:22:48.313 Data Area 4 for Telemetry Log: Not Supported 00:22:48.313 Error Log Page Entries Supported: 128 00:22:48.313 Keep Alive: Supported 00:22:48.313 Keep Alive Granularity: 1000 ms 00:22:48.313 00:22:48.313 NVM Command Set Attributes 00:22:48.313 ========================== 00:22:48.313 Submission Queue Entry Size 00:22:48.313 Max: 64 00:22:48.313 Min: 64 00:22:48.313 Completion Queue Entry Size 00:22:48.313 Max: 16 00:22:48.313 Min: 16 00:22:48.313 Number of Namespaces: 1024 00:22:48.313 Compare Command: Not Supported 00:22:48.313 Write Uncorrectable Command: Not Supported 00:22:48.313 Dataset Management Command: Supported 00:22:48.313 Write Zeroes Command: Supported 00:22:48.313 Set Features Save Field: Not Supported 00:22:48.313 Reservations: Not Supported 00:22:48.313 Timestamp: Not Supported 00:22:48.313 Copy: Not Supported 00:22:48.313 Volatile Write Cache: Present 00:22:48.313 Atomic Write Unit (Normal): 1 00:22:48.313 Atomic Write Unit (PFail): 1 00:22:48.313 Atomic Compare & Write Unit: 1 00:22:48.313 Fused Compare & Write: Not Supported 00:22:48.313 Scatter-Gather List 00:22:48.313 SGL Command Set: Supported 00:22:48.313 SGL Keyed: Not Supported 00:22:48.313 SGL Bit Bucket Descriptor: Not Supported 00:22:48.313 SGL Metadata Pointer: Not Supported 00:22:48.313 Oversized SGL: Not Supported 00:22:48.313 SGL Metadata Address: Not Supported 00:22:48.313 SGL Offset: Supported 00:22:48.313 Transport SGL Data Block: Not Supported 00:22:48.313 Replay Protected Memory Block: Not Supported 00:22:48.313 00:22:48.313 Firmware Slot Information 00:22:48.313 ========================= 00:22:48.313 Active slot: 0 00:22:48.313 00:22:48.313 Asymmetric Namespace Access 00:22:48.313 =========================== 00:22:48.313 Change Count : 0 00:22:48.313 Number of ANA Group Descriptors : 1 00:22:48.313 ANA Group Descriptor : 0 00:22:48.313 ANA Group ID : 1 00:22:48.313 Number of NSID Values : 1 00:22:48.313 Change Count : 0 00:22:48.313 ANA State : 1 00:22:48.313 Namespace Identifier : 1 00:22:48.313 00:22:48.313 Commands Supported and Effects 00:22:48.313 ============================== 00:22:48.313 Admin Commands 00:22:48.313 -------------- 00:22:48.313 Get Log Page (02h): Supported 00:22:48.313 Identify (06h): Supported 00:22:48.313 Abort (08h): Supported 00:22:48.313 Set Features (09h): Supported 00:22:48.313 Get Features (0Ah): Supported 00:22:48.313 Asynchronous Event Request (0Ch): Supported 00:22:48.313 Keep Alive (18h): Supported 00:22:48.313 I/O Commands 00:22:48.313 ------------ 00:22:48.313 Flush (00h): Supported 00:22:48.313 Write (01h): Supported LBA-Change 00:22:48.313 Read (02h): Supported 00:22:48.313 Write Zeroes (08h): Supported LBA-Change 00:22:48.313 Dataset Management (09h): Supported 00:22:48.313 00:22:48.313 Error Log 00:22:48.313 ========= 00:22:48.313 Entry: 0 00:22:48.313 Error Count: 0x3 00:22:48.313 Submission Queue Id: 0x0 00:22:48.313 Command Id: 0x5 00:22:48.313 Phase Bit: 0 00:22:48.313 Status Code: 0x2 00:22:48.313 Status Code Type: 0x0 00:22:48.313 Do Not Retry: 1 00:22:48.313 Error 
Location: 0x28 00:22:48.313 LBA: 0x0 00:22:48.313 Namespace: 0x0 00:22:48.313 Vendor Log Page: 0x0 00:22:48.313 ----------- 00:22:48.313 Entry: 1 00:22:48.313 Error Count: 0x2 00:22:48.313 Submission Queue Id: 0x0 00:22:48.313 Command Id: 0x5 00:22:48.313 Phase Bit: 0 00:22:48.313 Status Code: 0x2 00:22:48.313 Status Code Type: 0x0 00:22:48.313 Do Not Retry: 1 00:22:48.313 Error Location: 0x28 00:22:48.313 LBA: 0x0 00:22:48.313 Namespace: 0x0 00:22:48.313 Vendor Log Page: 0x0 00:22:48.313 ----------- 00:22:48.313 Entry: 2 00:22:48.313 Error Count: 0x1 00:22:48.313 Submission Queue Id: 0x0 00:22:48.313 Command Id: 0x4 00:22:48.313 Phase Bit: 0 00:22:48.313 Status Code: 0x2 00:22:48.313 Status Code Type: 0x0 00:22:48.313 Do Not Retry: 1 00:22:48.313 Error Location: 0x28 00:22:48.313 LBA: 0x0 00:22:48.313 Namespace: 0x0 00:22:48.313 Vendor Log Page: 0x0 00:22:48.313 00:22:48.313 Number of Queues 00:22:48.313 ================ 00:22:48.313 Number of I/O Submission Queues: 128 00:22:48.313 Number of I/O Completion Queues: 128 00:22:48.313 00:22:48.313 ZNS Specific Controller Data 00:22:48.313 ============================ 00:22:48.313 Zone Append Size Limit: 0 00:22:48.313 00:22:48.313 00:22:48.313 Active Namespaces 00:22:48.313 ================= 00:22:48.313 get_feature(0x05) failed 00:22:48.313 Namespace ID:1 00:22:48.313 Command Set Identifier: NVM (00h) 00:22:48.313 Deallocate: Supported 00:22:48.313 Deallocated/Unwritten Error: Not Supported 00:22:48.313 Deallocated Read Value: Unknown 00:22:48.313 Deallocate in Write Zeroes: Not Supported 00:22:48.313 Deallocated Guard Field: 0xFFFF 00:22:48.313 Flush: Supported 00:22:48.313 Reservation: Not Supported 00:22:48.313 Namespace Sharing Capabilities: Multiple Controllers 00:22:48.313 Size (in LBAs): 1310720 (5GiB) 00:22:48.313 Capacity (in LBAs): 1310720 (5GiB) 00:22:48.313 Utilization (in LBAs): 1310720 (5GiB) 00:22:48.313 UUID: d29afdc5-48a7-4ad6-ad8d-8b1654438393 00:22:48.314 Thin Provisioning: Not Supported 00:22:48.314 Per-NS Atomic Units: Yes 00:22:48.314 Atomic Boundary Size (Normal): 0 00:22:48.314 Atomic Boundary Size (PFail): 0 00:22:48.314 Atomic Boundary Offset: 0 00:22:48.314 NGUID/EUI64 Never Reused: No 00:22:48.314 ANA group ID: 1 00:22:48.314 Namespace Write Protected: No 00:22:48.314 Number of LBA Formats: 1 00:22:48.314 Current LBA Format: LBA Format #00 00:22:48.314 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:22:48.314 00:22:48.314 17:23:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:22:48.314 17:23:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:48.314 17:23:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:22:48.314 17:23:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:48.314 17:23:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:22:48.314 17:23:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:48.314 17:23:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:48.314 rmmod nvme_tcp 00:22:48.314 rmmod nvme_fabrics 00:22:48.314 17:23:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:48.314 17:23:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:22:48.314 17:23:34 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:22:48.314 17:23:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:22:48.314 17:23:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:48.314 17:23:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:48.314 17:23:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:48.314 17:23:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:48.314 17:23:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:48.314 17:23:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:48.314 17:23:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:48.314 17:23:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:48.314 17:23:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:48.314 17:23:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:22:48.314 17:23:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:22:48.314 17:23:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:22:48.571 17:23:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:48.571 17:23:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:48.571 17:23:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:22:48.571 17:23:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:48.571 17:23:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:22:48.571 17:23:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:22:48.571 17:23:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:22:49.137 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:49.137 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:22:49.395 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:22:49.395 ************************************ 00:22:49.395 END TEST nvmf_identify_kernel_target 00:22:49.395 ************************************ 00:22:49.395 00:22:49.395 real 0m2.791s 00:22:49.395 user 0m0.960s 00:22:49.395 sys 0m1.300s 00:22:49.395 17:23:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:49.395 17:23:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:22:49.395 17:23:35 nvmf_tcp.nvmf_host -- 
nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:22:49.395 17:23:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:49.395 17:23:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:49.395 17:23:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:49.395 ************************************ 00:22:49.395 START TEST nvmf_auth_host 00:22:49.395 ************************************ 00:22:49.395 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:22:49.395 * Looking for test storage... 00:22:49.395 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:49.395 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:49.395 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:22:49.395 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:49.395 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:49.395 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:49.395 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:49.395 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:49.395 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:49.395 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:49.395 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:49.395 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:49.395 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:49.395 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:22:49.395 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:22:49.395 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:49.395 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:49.395 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:49.395 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:49.395 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:49.395 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:49.395 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:49.395 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:49.395 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.395 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.395 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.395 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:22:49.395 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.395 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:22:49.395 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:49.395 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:49.395 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:49.395 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:49.395 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:49.395 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:49.395 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:49.395 17:23:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:49.395 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:22:49.395 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:22:49.395 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:22:49.395 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:22:49.395 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:49.395 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:22:49.395 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:22:49.395 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:22:49.395 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:22:49.395 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:49.395 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:49.395 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:49.395 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:49.395 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:49.395 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:49.395 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:49.395 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:49.395 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:22:49.395 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:22:49.395 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:22:49.395 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:22:49.395 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:22:49.395 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:22:49.395 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:49.395 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:49.395 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:49.653 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:49.653 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:49.653 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:49.653 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:49.653 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:22:49.653 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:49.654 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:49.654 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:49.654 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:49.654 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:49.654 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:49.654 Cannot find device "nvmf_tgt_br" 00:22:49.654 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # true 00:22:49.654 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:49.654 Cannot find device "nvmf_tgt_br2" 00:22:49.654 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@156 -- # true 00:22:49.654 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:49.654 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:49.654 Cannot find device "nvmf_tgt_br" 00:22:49.654 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # true 00:22:49.654 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:49.654 Cannot find device "nvmf_tgt_br2" 00:22:49.654 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # true 00:22:49.654 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:49.654 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:49.654 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:49.654 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:49.654 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:22:49.654 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:49.654 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:49.654 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:22:49.654 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:49.654 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:49.654 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:49.654 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:49.654 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:49.654 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:49.654 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:49.654 17:23:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:49.654 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:49.654 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:49.654 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:49.654 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:49.654 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:49.654 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:49.654 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:49.912 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:49.912 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:49.912 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:49.912 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:49.912 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:49.912 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:49.913 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:49.913 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:49.913 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:49.913 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:49.913 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:22:49.913 00:22:49.913 --- 10.0.0.2 ping statistics --- 00:22:49.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:49.913 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:22:49.913 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:49.913 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:49.913 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:22:49.913 00:22:49.913 --- 10.0.0.3 ping statistics --- 00:22:49.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:49.913 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:22:49.913 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:49.913 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:49.913 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:22:49.913 00:22:49.913 --- 10.0.0.1 ping statistics --- 00:22:49.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:49.913 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:22:49.913 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:49.913 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@433 -- # return 0 00:22:49.913 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:49.913 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:49.913 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:49.913 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:49.913 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:49.913 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:49.913 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:49.913 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:22:49.913 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:49.913 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:49.913 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:49.913 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=90923 00:22:49.913 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 90923 00:22:49.913 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:22:49.913 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 90923 ']' 00:22:49.913 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:49.913 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:49.913 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
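For reference, the veth/namespace fixture that nvmf_veth_init assembled in the trace above (and verified with the three pings) condenses to the sequence below. This is a sketch reconstructed from the logged ip/iptables calls, using the same interface names and 10.0.0.x addresses; the pre-cleanup that produced the "Cannot find device" / "Cannot open network namespace" messages and all error handling are omitted, and it must run as root.

  # target side lives in its own network namespace
  ip netns add nvmf_tgt_ns_spdk
  # three veth pairs: one initiator-facing, two target-facing
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  # move the target ends into the namespace
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # addressing: 10.0.0.1 = initiator, 10.0.0.2/10.0.0.3 = target listeners
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  # bring everything up on both sides
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # bridge the host-side peers together so all three addresses can reach each other
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  # allow NVMe/TCP (4420) into the initiator interface and forwarding across the bridge
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # connectivity check, exactly as in the trace
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
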
00:22:49.913 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:49.913 17:23:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:51.288 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:51.288 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:22:51.288 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:51.288 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:51.288 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:51.288 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:51.288 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:22:51.288 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:22:51.288 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:51.288 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:51.288 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:51.288 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:22:51.288 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:22:51.288 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:51.288 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=cb73ab72dbbaccdd44b9dd5588a8876c 00:22:51.288 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:22:51.288 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.bac 00:22:51.288 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key cb73ab72dbbaccdd44b9dd5588a8876c 0 00:22:51.288 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 cb73ab72dbbaccdd44b9dd5588a8876c 0 00:22:51.288 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:51.288 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:51.288 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=cb73ab72dbbaccdd44b9dd5588a8876c 00:22:51.288 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:22:51.288 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:51.288 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.bac 00:22:51.288 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.bac 00:22:51.288 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.bac 00:22:51.288 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:22:51.288 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:51.288 17:23:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:51.288 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:51.288 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:22:51.288 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:22:51.288 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:22:51.288 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=1106289c2ae63ebe599501e4487f6adc5a96134bc3ace3e87bbdbc6a29da1602 00:22:51.288 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:22:51.288 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.z4c 00:22:51.288 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 1106289c2ae63ebe599501e4487f6adc5a96134bc3ace3e87bbdbc6a29da1602 3 00:22:51.288 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 1106289c2ae63ebe599501e4487f6adc5a96134bc3ace3e87bbdbc6a29da1602 3 00:22:51.288 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:51.288 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:51.288 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=1106289c2ae63ebe599501e4487f6adc5a96134bc3ace3e87bbdbc6a29da1602 00:22:51.288 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:22:51.288 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:51.288 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.z4c 00:22:51.288 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.z4c 00:22:51.288 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.z4c 00:22:51.288 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:22:51.288 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:51.288 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:51.288 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:51.288 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:22:51.288 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:22:51.288 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:51.288 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=9b5b402fc78df4d185423e6a661bcd23d748867035e287bb 00:22:51.288 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:22:51.288 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.BOV 00:22:51.288 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 9b5b402fc78df4d185423e6a661bcd23d748867035e287bb 0 00:22:51.288 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 9b5b402fc78df4d185423e6a661bcd23d748867035e287bb 0 
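Each gen_dhchap_key call in this stretch of the trace follows the same recipe: draw len/2 random bytes as plain hex with xxd, stage them in a mktemp file, wrap the hex together with a digest id (null=0, sha256=1, sha384=2, sha512=3, per the digests map above) into a DHHC-1 secret via a short inline python helper, then chmod the file to 0600 and echo its path into keys[]/ckeys[]. A minimal sketch of that recipe follows; the helper name is illustrative and the DHHC-1 wrapping step is only noted, not reproduced.

  # Sketch only: mirrors the xxd/mktemp/chmod steps visible in the trace.
  # The real gen_dhchap_key in nvmf/common.sh additionally encodes the material
  # plus the digest id into a DHHC-1 secret before writing the file.
  gen_key_material_sketch() {    # usage: gen_key_material_sketch <digest-name> <hex-len>
      local digest=$1 len=$2 key file
      key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # <hex-len> hex characters of random material
      file=$(mktemp -t "spdk.key-${digest}.XXX")
      echo "$key" > "$file"
      chmod 0600 "$file"                               # keep the secret private, as the trace does
      echo "$file"                                     # caller records the path
  }
  keyfile=$(gen_key_material_sketch null 32)
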
00:22:51.288 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:51.288 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:51.288 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=9b5b402fc78df4d185423e6a661bcd23d748867035e287bb 00:22:51.288 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:22:51.288 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:51.288 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.BOV 00:22:51.288 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.BOV 00:22:51.288 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.BOV 00:22:51.288 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:22:51.288 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:51.288 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:51.288 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:51.288 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:22:51.288 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:22:51.288 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:51.289 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b397f79c7b0337c91aa602ce9abff84ec8c92050c4afe5bc 00:22:51.289 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:22:51.289 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.KvC 00:22:51.289 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b397f79c7b0337c91aa602ce9abff84ec8c92050c4afe5bc 2 00:22:51.289 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b397f79c7b0337c91aa602ce9abff84ec8c92050c4afe5bc 2 00:22:51.289 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:51.289 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:51.289 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b397f79c7b0337c91aa602ce9abff84ec8c92050c4afe5bc 00:22:51.289 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:22:51.289 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:51.289 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.KvC 00:22:51.289 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.KvC 00:22:51.289 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.KvC 00:22:51.289 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:22:51.289 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:51.289 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:51.289 17:23:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:51.289 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:22:51.289 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:22:51.289 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:51.289 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=4b84c80292c80d1f49096422bf7e68b7 00:22:51.289 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:22:51.289 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.bNc 00:22:51.289 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 4b84c80292c80d1f49096422bf7e68b7 1 00:22:51.289 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 4b84c80292c80d1f49096422bf7e68b7 1 00:22:51.289 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:51.289 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:51.289 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=4b84c80292c80d1f49096422bf7e68b7 00:22:51.289 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:22:51.289 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:51.289 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.bNc 00:22:51.289 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.bNc 00:22:51.289 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.bNc 00:22:51.289 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:22:51.289 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:51.289 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:51.289 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:51.289 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:22:51.289 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:22:51.289 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:51.289 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=9e36867b65267c68b9702c5330abd126 00:22:51.289 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:22:51.289 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.abZ 00:22:51.289 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 9e36867b65267c68b9702c5330abd126 1 00:22:51.289 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 9e36867b65267c68b9702c5330abd126 1 00:22:51.289 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:51.289 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:51.289 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # 
key=9e36867b65267c68b9702c5330abd126 00:22:51.289 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:22:51.289 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:51.289 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.abZ 00:22:51.548 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.abZ 00:22:51.548 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.abZ 00:22:51.548 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:22:51.548 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:51.548 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:51.548 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:51.548 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:22:51.548 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:22:51.548 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:51.548 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=6ef69e0d42035e729beb35549082944c0aa259ac2cd0bb7c 00:22:51.548 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:22:51.548 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.GqL 00:22:51.548 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 6ef69e0d42035e729beb35549082944c0aa259ac2cd0bb7c 2 00:22:51.548 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 6ef69e0d42035e729beb35549082944c0aa259ac2cd0bb7c 2 00:22:51.548 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:51.548 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:51.548 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=6ef69e0d42035e729beb35549082944c0aa259ac2cd0bb7c 00:22:51.548 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:22:51.548 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:51.548 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.GqL 00:22:51.548 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.GqL 00:22:51.548 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.GqL 00:22:51.548 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:22:51.548 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:51.548 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:51.548 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:51.548 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:22:51.548 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:22:51.548 17:23:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:51.548 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=20f9e668c73d2fc8e9a70575c004d248 00:22:51.548 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:22:51.548 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.uHG 00:22:51.548 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 20f9e668c73d2fc8e9a70575c004d248 0 00:22:51.548 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 20f9e668c73d2fc8e9a70575c004d248 0 00:22:51.548 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:51.548 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:51.548 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=20f9e668c73d2fc8e9a70575c004d248 00:22:51.548 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:22:51.548 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:51.548 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.uHG 00:22:51.548 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.uHG 00:22:51.548 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.uHG 00:22:51.548 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:22:51.548 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:51.548 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:51.548 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:51.548 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:22:51.548 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:22:51.548 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:22:51.548 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=db6d32ca838098ef118ab8956f6cba50ca9b3632b7354f40a13a5e6c12223b79 00:22:51.548 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:22:51.548 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.yjQ 00:22:51.548 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key db6d32ca838098ef118ab8956f6cba50ca9b3632b7354f40a13a5e6c12223b79 3 00:22:51.548 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 db6d32ca838098ef118ab8956f6cba50ca9b3632b7354f40a13a5e6c12223b79 3 00:22:51.548 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:51.548 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:51.548 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=db6d32ca838098ef118ab8956f6cba50ca9b3632b7354f40a13a5e6c12223b79 00:22:51.548 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:22:51.548 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
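
Note on the trace above: every gen_dhchap_key call follows the same recipe: read len/2 random bytes with xxd -p to get a len-character hex secret, create a mode-0600 temp file named after the digest, and store the formatted secret in it. A minimal standalone sketch of that recipe follows; the helper and file names are illustrative, not the exact functions from nvmf/common.sh, and the DHHC-1 wrapping step is sketched separately below.

#!/usr/bin/env bash
# Illustrative re-creation of the key-generation step seen in the trace:
# draw random bytes, keep the hex form, store it in a private temp file.
gen_hex_key() {
    local hex_len=$1                                     # 32/48/64 hex chars, as in the trace
    local key file
    key=$(xxd -p -c0 -l $((hex_len / 2)) /dev/urandom)   # hex_len/2 random bytes -> hex string
    file=$(mktemp -t spdk.key-demo.XXX)                  # demo name; the test uses spdk.key-<digest>.XXX
    echo "$key" > "$file"
    chmod 0600 "$file"                                   # secrets must not be world-readable
    echo "$file"
}

# Example: a 48-hex-char secret, the size used for the sha384 keys above.
keyfile=$(gen_hex_key 48)
echo "wrote hex secret to $keyfile"
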
nvmf/common.sh@705 -- # python - 00:22:51.548 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.yjQ 00:22:51.548 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.yjQ 00:22:51.548 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.yjQ 00:22:51.548 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:22:51.548 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 90923 00:22:51.548 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 90923 ']' 00:22:51.548 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:51.548 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:51.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:51.548 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:51.548 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:51.548 17:23:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:51.806 17:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:51.806 17:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:22:51.806 17:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:22:51.806 17:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.bac 00:22:51.806 17:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.806 17:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:51.806 17:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.806 17:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.z4c ]] 00:22:51.806 17:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.z4c 00:22:51.806 17:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.806 17:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:51.806 17:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.806 17:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:22:51.806 17:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.BOV 00:22:51.806 17:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.806 17:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:52.064 17:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.064 17:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.KvC ]] 00:22:52.064 17:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
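
The "python -" steps in the trace turn each hex secret into the DH-HMAC-CHAP secret representation, DHHC-1:<digest id>:<base64 payload>:, with digest ids 0-3 matching the null/sha256/sha384/sha512 map shown above. Decoding the DHHC-1 strings that appear later in this log shows the payload is the ASCII hex key plus four trailing bytes; treating those as a CRC-32 of the secret (the checksum used for NVMe in-band authentication secrets) is an assumption here, as is the byte order. A hedged sketch mirroring the shell-plus-inline-python split used by the test:

#!/usr/bin/env bash
# Sketch of the DHHC-1 formatting step.
# Assumed encoding: base64( ascii_hex_secret || CRC-32(ascii_hex_secret), little-endian ).
format_dhhc1() {
    local digest_id=$1 hex_key=$2
    python3 - "$digest_id" "$hex_key" <<'EOF'
import base64, binascii, struct, sys

digest_id, secret = sys.argv[1], sys.argv[2].encode()
blob = secret + struct.pack("<I", binascii.crc32(secret))   # checksum detail is an assumption
print(f"DHHC-1:{digest_id}:{base64.b64encode(blob).decode()}:")
EOF
}

# Example with the first generated key from the trace (digest id 00 = null):
format_dhhc1 00 9b5b402fc78df4d185423e6a661bcd23d748867035e287bb
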
/tmp/spdk.key-sha384.KvC 00:22:52.064 17:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.064 17:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:52.064 17:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.064 17:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:22:52.064 17:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.bNc 00:22:52.064 17:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.064 17:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:52.064 17:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.064 17:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.abZ ]] 00:22:52.064 17:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.abZ 00:22:52.064 17:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.064 17:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:52.064 17:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.064 17:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:22:52.064 17:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.GqL 00:22:52.064 17:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.064 17:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:52.064 17:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.064 17:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.uHG ]] 00:22:52.064 17:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.uHG 00:22:52.064 17:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.064 17:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:52.064 17:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.064 17:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:22:52.064 17:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.yjQ 00:22:52.064 17:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.064 17:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:52.064 17:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.064 17:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:22:52.064 17:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:22:52.064 17:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:22:52.064 17:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:52.064 17:23:38 
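
With the key files in place, the trace registers them with the running SPDK application over JSON-RPC: keyring_file_add_key key0..key4 for the host secrets and ckey0..ckey3 for the controller secrets (key4 has no controller counterpart). The same registration can be driven directly with scripts/rpc.py; the rpc.py path and socket below are taken from this CI layout and may differ elsewhere.

#!/usr/bin/env bash
# Register the generated DHHC-1 key files in the SPDK keyring,
# mirroring the keyring_file_add_key calls in the trace.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py     # path assumed from this environment
SOCK=/var/tmp/spdk.sock

keys=(/tmp/spdk.key-null.bac /tmp/spdk.key-null.BOV)        # host keys from the trace (key0, key1, ...)
ckeys=(/tmp/spdk.key-sha512.z4c /tmp/spdk.key-sha384.KvC)   # controller keys (ckey0, ckey1, ...)

for i in "${!keys[@]}"; do
    "$RPC" -s "$SOCK" keyring_file_add_key "key$i" "${keys[$i]}"
    # Controller keys are optional; register one only when it was generated.
    if [[ -n "${ckeys[$i]:-}" ]]; then
        "$RPC" -s "$SOCK" keyring_file_add_key "ckey$i" "${ckeys[$i]}"
    fi
done
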
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:52.064 17:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:52.064 17:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:52.064 17:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:52.064 17:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:52.064 17:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:52.064 17:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:52.064 17:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:52.064 17:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:52.064 17:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:22:52.064 17:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:22:52.064 17:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:22:52.064 17:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:52.065 17:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:22:52.065 17:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:22:52.065 17:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:22:52.065 17:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:22:52.065 17:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:22:52.065 17:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:22:52.065 17:23:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:22:52.326 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:52.326 Waiting for block devices as requested 00:22:52.590 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:22:52.590 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:22:53.160 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:22:53.160 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:22:53.160 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:22:53.160 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:22:53.160 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:22:53.160 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:22:53.160 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:22:53.160 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:22:53.160 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:22:53.160 No valid GPT data, bailing 00:22:53.160 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:22:53.160 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:22:53.161 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:22:53.161 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:22:53.161 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:22:53.161 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:22:53.161 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:22:53.161 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:22:53.161 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:22:53.161 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:22:53.161 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:22:53.161 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:22:53.161 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:22:53.161 No valid GPT data, bailing 00:22:53.161 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:22:53.161 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:22:53.161 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
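
Before the kernel target can be built, the trace walks /sys/block/nvme* looking for a namespace it may safely claim: not zoned and with no partition table ("No valid GPT data, bailing" is the expected outcome of that probe); /dev/nvme1n1 ends up as the backing device. Below is a simplified version of that selection using the same blkid probe as scripts/common.sh; the real helper additionally checks spdk-gpt.py output and mounted partitions.

#!/usr/bin/env bash
# Pick an idle NVMe namespace to back the kernel nvmet target,
# roughly as the block-device scan in the trace does.
nvme_dev=""
for block in /sys/block/nvme*; do
    [[ -e $block ]] || continue
    dev=/dev/${block##*/}
    [[ $(cat "$block/queue/zoned" 2>/dev/null) == none ]] || continue   # skip zoned namespaces
    # An empty partition-table type means the namespace looks unused.
    if [[ -z $(blkid -s PTTYPE -o value "$dev" 2>/dev/null) ]]; then
        nvme_dev=$dev
    fi
done
echo "using ${nvme_dev:-<none found>} as the nvmet backing device"
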
scripts/common.sh@392 -- # return 1 00:22:53.161 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:22:53.161 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:22:53.161 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:22:53.161 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:22:53.161 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:22:53.161 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:22:53.161 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:22:53.161 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:22:53.161 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:22:53.161 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:22:53.419 No valid GPT data, bailing 00:22:53.419 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:22:53.419 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:22:53.419 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:22:53.419 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:22:53.419 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:22:53.419 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:22:53.419 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:22:53.419 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:22:53.419 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:22:53.419 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:22:53.419 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:22:53.419 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:22:53.419 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:22:53.419 No valid GPT data, bailing 00:22:53.419 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:22:53.419 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:22:53.419 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:22:53.419 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:22:53.419 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:22:53.419 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:53.419 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:22:53.419 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:22:53.419 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:22:53.419 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:22:53.419 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:22:53.419 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:22:53.420 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:22:53.420 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:22:53.420 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:22:53.420 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:22:53.420 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:22:53.420 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid=6e739a91-f4c2-49d2-9c71-a65eb7d5315b -a 10.0.0.1 -t tcp -s 4420 00:22:53.420 00:22:53.420 Discovery Log Number of Records 2, Generation counter 2 00:22:53.420 =====Discovery Log Entry 0====== 00:22:53.420 trtype: tcp 00:22:53.420 adrfam: ipv4 00:22:53.420 subtype: current discovery subsystem 00:22:53.420 treq: not specified, sq flow control disable supported 00:22:53.420 portid: 1 00:22:53.420 trsvcid: 4420 00:22:53.420 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:22:53.420 traddr: 10.0.0.1 00:22:53.420 eflags: none 00:22:53.420 sectype: none 00:22:53.420 =====Discovery Log Entry 1====== 00:22:53.420 trtype: tcp 00:22:53.420 adrfam: ipv4 00:22:53.420 subtype: nvme subsystem 00:22:53.420 treq: not specified, sq flow control disable supported 00:22:53.420 portid: 1 00:22:53.420 trsvcid: 4420 00:22:53.420 subnqn: nqn.2024-02.io.spdk:cnode0 00:22:53.420 traddr: 10.0.0.1 00:22:53.420 eflags: none 00:22:53.420 sectype: none 00:22:53.420 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:22:53.420 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:22:53.420 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:22:53.420 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:22:53.420 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:53.420 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:53.420 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:53.420 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:53.420 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWI1YjQwMmZjNzhkZjRkMTg1NDIzZTZhNjYxYmNkMjNkNzQ4ODY3MDM1ZTI4N2JiSRmwVQ==: 00:22:53.420 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
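
The configure_kernel_target sequence above builds a Linux-kernel NVMe-oF target entirely through configfs: a subsystem (nqn.2024-02.io.spdk:cnode0), one namespace backed by /dev/nvme1n1, and a TCP port on 10.0.0.1:4420, linked together and then confirmed with nvme discover (the two discovery-log entries in the output). The trace hides the redirection targets of the echo commands, so the attribute names below are assumptions based on the usual nvmet configfs layout.

#!/usr/bin/env bash
# Minimal kernel nvmet target over TCP, following the configfs writes in the trace.
set -e
nqn=nqn.2024-02.io.spdk:cnode0
subsys=/sys/kernel/config/nvmet/subsystems/$nqn
port=/sys/kernel/config/nvmet/ports/1

modprobe nvmet                      # the trace loads only nvmet; nvmet_tcp may need loading separately

mkdir "$subsys"
mkdir "$subsys/namespaces/1"
mkdir "$port"

echo "SPDK-$nqn"  > "$subsys/attr_model"            # assumed destination of the 'echo SPDK-nqn...' step
echo 1            > "$subsys/attr_allow_any_host"   # assumed; tightened again once auth is configured
echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
echo 1            > "$subsys/namespaces/1/enable"

echo 10.0.0.1 > "$port/addr_traddr"
echo tcp      > "$port/addr_trtype"
echo 4420     > "$port/addr_trsvcid"
echo ipv4     > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"

nvme discover -t tcp -a 10.0.0.1 -s 4420            # should list discovery + $nqn, as in the log
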
ckey=DHHC-1:02:YjM5N2Y3OWM3YjAzMzdjOTFhYTYwMmNlOWFiZmY4NGVjOGM5MjA1MGM0YWZlNWJjTAiHNQ==: 00:22:53.420 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:53.420 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:53.678 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWI1YjQwMmZjNzhkZjRkMTg1NDIzZTZhNjYxYmNkMjNkNzQ4ODY3MDM1ZTI4N2JiSRmwVQ==: 00:22:53.678 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjM5N2Y3OWM3YjAzMzdjOTFhYTYwMmNlOWFiZmY4NGVjOGM5MjA1MGM0YWZlNWJjTAiHNQ==: ]] 00:22:53.678 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjM5N2Y3OWM3YjAzMzdjOTFhYTYwMmNlOWFiZmY4NGVjOGM5MjA1MGM0YWZlNWJjTAiHNQ==: 00:22:53.678 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:22:53.678 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:22:53.678 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:22:53.678 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:53.678 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:22:53.678 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:53.678 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:22:53.678 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:53.678 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:53.678 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:53.678 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:53.678 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.678 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:53.678 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.678 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:53.678 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:53.678 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:53.678 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:53.678 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:53.678 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:53.678 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:53.678 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:53.678 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:53.678 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 
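
Target-side authentication is configured the same way: host/auth.sh creates a host entry for nqn.2024-02.io.spdk:host0, links it into the subsystem's allowed_hosts, and nvmet_auth_set_key then writes the hash, DH group, host secret, and (when one exists) controller secret for that host before every round. The configfs attribute names are again not visible in the trace; the mapping below assumes the standard kernel nvmet host attributes and should be read accordingly.

#!/usr/bin/env bash
# Assumed configfs destinations for the values echoed by nvmet_auth_set_key in the trace.
hostnqn=nqn.2024-02.io.spdk:host0
subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
host=/sys/kernel/config/nvmet/hosts/$hostnqn

mkdir "$host"
echo 0 > "$subsys/attr_allow_any_host"      # assumed target of the 'echo 0' step (only listed hosts may connect)
ln -s "$host" "$subsys/allowed_hosts/"

echo 'hmac(sha256)' > "$host/dhchap_hash"                 # digest selected for this round
echo ffdhe2048      > "$host/dhchap_dhgroup"              # DH group selected for this round
cat /tmp/spdk.key-null.BOV   > "$host/dhchap_key"         # host secret (key1 in this round)
cat /tmp/spdk.key-sha384.KvC > "$host/dhchap_ctrl_key"    # optional controller secret (ckey1)
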
10.0.0.1 ]] 00:22:53.678 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:53.678 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:53.678 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.678 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:53.678 nvme0n1 00:22:53.678 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.678 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:53.678 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.678 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:53.678 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:53.678 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.678 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:53.678 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:53.678 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.678 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:53.678 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.678 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:22:53.678 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:53.678 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:53.678 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:22:53.678 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:53.678 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:53.678 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:53.678 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:53.678 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2I3M2FiNzJkYmJhY2NkZDQ0YjlkZDU1ODhhODg3NmO+xGRR: 00:22:53.678 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTEwNjI4OWMyYWU2M2ViZTU5OTUwMWU0NDg3ZjZhZGM1YTk2MTM0YmMzYWNlM2U4N2JiZGJjNmEyOWRhMTYwMm18XcY=: 00:22:53.678 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:53.678 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:53.678 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2I3M2FiNzJkYmJhY2NkZDQ0YjlkZDU1ODhhODg3NmO+xGRR: 00:22:53.678 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTEwNjI4OWMyYWU2M2ViZTU5OTUwMWU0NDg3ZjZhZGM1YTk2MTM0YmMzYWNlM2U4N2JiZGJjNmEyOWRhMTYwMm18XcY=: ]] 00:22:53.678 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MTEwNjI4OWMyYWU2M2ViZTU5OTUwMWU0NDg3ZjZhZGM1YTk2MTM0YmMzYWNlM2U4N2JiZGJjNmEyOWRhMTYwMm18XcY=: 00:22:53.678 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:22:53.678 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:53.678 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:53.678 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:53.678 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:53.678 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:53.678 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:53.678 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.678 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:53.678 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.678 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:53.678 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:53.678 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:53.678 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:53.678 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:53.678 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:53.678 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:53.678 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:53.679 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:53.679 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:53.679 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:53.679 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:53.679 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.679 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:53.937 nvme0n1 00:22:53.937 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.937 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:53.937 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.937 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:53.937 17:23:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:53.937 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.937 
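
The initiator half of connect_authenticate is plain JSON-RPC against the SPDK host application: restrict bdev_nvme to the digest/DH-group pair under test, attach the controller using the keyring names registered earlier, check that it appears in bdev_nvme_get_controllers (which is what makes the round pass), and detach again. One round written out with scripts/rpc.py (paths assumed as before; flags as they appear in the trace):

#!/usr/bin/env bash
# One DH-HMAC-CHAP authentication round, as driven over RPC in the trace.
RPC=(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock)

"${RPC[@]}" bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

"${RPC[@]}" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Authentication succeeded only if the controller is actually there.
name=$("${RPC[@]}" bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == nvme0 ]] && echo "DH-HMAC-CHAP round passed"

"${RPC[@]}" bdev_nvme_detach_controller nvme0
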
17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:53.937 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:53.937 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.937 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:53.937 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.937 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:53.937 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:22:53.937 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:53.937 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:53.937 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:53.937 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:53.937 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWI1YjQwMmZjNzhkZjRkMTg1NDIzZTZhNjYxYmNkMjNkNzQ4ODY3MDM1ZTI4N2JiSRmwVQ==: 00:22:53.937 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjM5N2Y3OWM3YjAzMzdjOTFhYTYwMmNlOWFiZmY4NGVjOGM5MjA1MGM0YWZlNWJjTAiHNQ==: 00:22:53.937 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:53.937 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:53.937 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWI1YjQwMmZjNzhkZjRkMTg1NDIzZTZhNjYxYmNkMjNkNzQ4ODY3MDM1ZTI4N2JiSRmwVQ==: 00:22:53.937 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjM5N2Y3OWM3YjAzMzdjOTFhYTYwMmNlOWFiZmY4NGVjOGM5MjA1MGM0YWZlNWJjTAiHNQ==: ]] 00:22:53.937 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjM5N2Y3OWM3YjAzMzdjOTFhYTYwMmNlOWFiZmY4NGVjOGM5MjA1MGM0YWZlNWJjTAiHNQ==: 00:22:53.937 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:22:53.937 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:53.937 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:53.937 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:53.937 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:53.937 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:53.937 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:53.937 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.937 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:53.937 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.937 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:53.937 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:53.937 17:23:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:53.937 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:53.937 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:53.937 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:53.937 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:53.937 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:53.937 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:53.937 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:53.937 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:53.937 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:53.937 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:53.937 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:53.937 nvme0n1 00:22:53.937 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.195 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:54.195 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:54.195 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.195 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:54.195 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.195 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:54.195 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:54.195 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.195 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:54.195 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.195 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:54.195 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:22:54.195 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:54.195 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:54.196 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:54.196 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:54.196 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGI4NGM4MDI5MmM4MGQxZjQ5MDk2NDIyYmY3ZTY4YjenEl3H: 00:22:54.196 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWUzNjg2N2I2NTI2N2M2OGI5NzAyYzUzMzBhYmQxMjaF2jO1: 00:22:54.196 17:23:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:54.196 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:54.196 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGI4NGM4MDI5MmM4MGQxZjQ5MDk2NDIyYmY3ZTY4YjenEl3H: 00:22:54.196 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWUzNjg2N2I2NTI2N2M2OGI5NzAyYzUzMzBhYmQxMjaF2jO1: ]] 00:22:54.196 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWUzNjg2N2I2NTI2N2M2OGI5NzAyYzUzMzBhYmQxMjaF2jO1: 00:22:54.196 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:22:54.196 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:54.196 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:54.196 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:54.196 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:54.196 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:54.196 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:54.196 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.196 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:54.196 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.196 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:54.196 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:54.196 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:54.196 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:54.196 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:54.196 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:54.196 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:54.196 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:54.196 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:54.196 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:54.196 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:54.196 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:54.196 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.196 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:54.196 nvme0n1 00:22:54.196 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.196 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:54.196 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:54.196 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.196 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:54.196 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.196 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:54.196 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:54.196 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.196 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:54.196 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.196 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:54.196 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:22:54.196 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:54.196 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:54.196 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:54.196 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:54.196 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmVmNjllMGQ0MjAzNWU3MjliZWIzNTU0OTA4Mjk0NGMwYWEyNTlhYzJjZDBiYjdjK28vdg==: 00:22:54.196 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjBmOWU2NjhjNzNkMmZjOGU5YTcwNTc1YzAwNGQyNDhDVlXB: 00:22:54.196 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:54.196 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:54.196 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmVmNjllMGQ0MjAzNWU3MjliZWIzNTU0OTA4Mjk0NGMwYWEyNTlhYzJjZDBiYjdjK28vdg==: 00:22:54.196 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjBmOWU2NjhjNzNkMmZjOGU5YTcwNTc1YzAwNGQyNDhDVlXB: ]] 00:22:54.196 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjBmOWU2NjhjNzNkMmZjOGU5YTcwNTc1YzAwNGQyNDhDVlXB: 00:22:54.196 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:22:54.196 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:54.196 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:54.196 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:54.196 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:54.196 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:54.196 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:54.196 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.196 17:23:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:54.196 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.196 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:54.196 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:54.196 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:54.196 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:54.196 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:54.196 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:54.196 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:54.196 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:54.196 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:54.196 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:54.196 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:54.196 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:54.196 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.455 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:54.455 nvme0n1 00:22:54.455 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.455 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:54.455 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.455 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:54.455 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:54.455 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.455 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:54.455 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:54.455 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.455 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:54.455 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.455 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:54.455 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:22:54.455 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:54.455 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:54.455 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:54.455 
17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:54.455 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGI2ZDMyY2E4MzgwOThlZjExOGFiODk1NmY2Y2JhNTBjYTliMzYzMmI3MzU0ZjQwYTEzYTVlNmMxMjIyM2I3Ob9Oa6o=: 00:22:54.455 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:54.455 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:54.455 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:54.455 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGI2ZDMyY2E4MzgwOThlZjExOGFiODk1NmY2Y2JhNTBjYTliMzYzMmI3MzU0ZjQwYTEzYTVlNmMxMjIyM2I3Ob9Oa6o=: 00:22:54.455 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:54.455 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:22:54.455 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:54.455 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:54.455 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:54.455 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:54.455 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:54.455 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:54.455 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.455 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:54.455 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.455 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:54.455 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:54.455 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:54.455 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:54.455 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:54.455 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:54.455 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:54.455 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:54.455 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:54.455 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:54.455 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:54.455 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:54.455 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.455 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:22:54.455 nvme0n1 00:22:54.455 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.455 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:54.455 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:54.713 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.713 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:54.713 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.713 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:54.713 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:54.713 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.713 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:54.713 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.713 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:54.713 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:54.713 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:22:54.713 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:54.713 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:54.713 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:54.713 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:54.713 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2I3M2FiNzJkYmJhY2NkZDQ0YjlkZDU1ODhhODg3NmO+xGRR: 00:22:54.713 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTEwNjI4OWMyYWU2M2ViZTU5OTUwMWU0NDg3ZjZhZGM1YTk2MTM0YmMzYWNlM2U4N2JiZGJjNmEyOWRhMTYwMm18XcY=: 00:22:54.713 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:54.713 17:23:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:54.971 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2I3M2FiNzJkYmJhY2NkZDQ0YjlkZDU1ODhhODg3NmO+xGRR: 00:22:54.971 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTEwNjI4OWMyYWU2M2ViZTU5OTUwMWU0NDg3ZjZhZGM1YTk2MTM0YmMzYWNlM2U4N2JiZGJjNmEyOWRhMTYwMm18XcY=: ]] 00:22:54.972 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTEwNjI4OWMyYWU2M2ViZTU5OTUwMWU0NDg3ZjZhZGM1YTk2MTM0YmMzYWNlM2U4N2JiZGJjNmEyOWRhMTYwMm18XcY=: 00:22:54.972 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:22:54.972 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:54.972 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:54.972 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:54.972 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:54.972 17:23:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:54.972 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:54.972 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.972 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:54.972 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.972 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:54.972 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:54.972 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:54.972 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:54.972 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:54.972 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:54.972 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:54.972 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:54.972 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:54.972 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:54.972 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:54.972 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:54.972 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.972 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:54.972 nvme0n1 00:22:54.972 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.972 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:54.972 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.972 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:54.972 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:54.972 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.231 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:55.231 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:55.231 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.231 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.231 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.231 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:55.231 17:23:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:22:55.231 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:55.231 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:55.231 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:55.231 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:55.231 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWI1YjQwMmZjNzhkZjRkMTg1NDIzZTZhNjYxYmNkMjNkNzQ4ODY3MDM1ZTI4N2JiSRmwVQ==: 00:22:55.231 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjM5N2Y3OWM3YjAzMzdjOTFhYTYwMmNlOWFiZmY4NGVjOGM5MjA1MGM0YWZlNWJjTAiHNQ==: 00:22:55.231 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:55.231 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:55.231 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWI1YjQwMmZjNzhkZjRkMTg1NDIzZTZhNjYxYmNkMjNkNzQ4ODY3MDM1ZTI4N2JiSRmwVQ==: 00:22:55.231 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjM5N2Y3OWM3YjAzMzdjOTFhYTYwMmNlOWFiZmY4NGVjOGM5MjA1MGM0YWZlNWJjTAiHNQ==: ]] 00:22:55.231 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjM5N2Y3OWM3YjAzMzdjOTFhYTYwMmNlOWFiZmY4NGVjOGM5MjA1MGM0YWZlNWJjTAiHNQ==: 00:22:55.231 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:22:55.231 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:55.231 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:55.231 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:55.231 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:55.231 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:55.231 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:55.231 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.231 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.231 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.231 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:55.231 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:55.231 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:55.231 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:55.231 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:55.231 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:55.231 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:55.231 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:55.231 17:23:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:55.231 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:55.231 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:55.231 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:55.231 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.231 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.231 nvme0n1 00:22:55.231 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.231 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:55.231 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.231 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.231 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:55.231 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.231 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:55.231 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:55.231 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.231 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.231 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.231 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:55.231 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:22:55.231 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:55.231 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:55.231 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:55.231 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:55.231 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGI4NGM4MDI5MmM4MGQxZjQ5MDk2NDIyYmY3ZTY4YjenEl3H: 00:22:55.231 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWUzNjg2N2I2NTI2N2M2OGI5NzAyYzUzMzBhYmQxMjaF2jO1: 00:22:55.231 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:55.231 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:55.231 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGI4NGM4MDI5MmM4MGQxZjQ5MDk2NDIyYmY3ZTY4YjenEl3H: 00:22:55.231 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWUzNjg2N2I2NTI2N2M2OGI5NzAyYzUzMzBhYmQxMjaF2jO1: ]] 00:22:55.231 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWUzNjg2N2I2NTI2N2M2OGI5NzAyYzUzMzBhYmQxMjaF2jO1: 00:22:55.231 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:22:55.232 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:55.232 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:55.232 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:55.232 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:55.232 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:55.232 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:55.232 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.232 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.232 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.232 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:55.232 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:55.232 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:55.232 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:55.232 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:55.232 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:55.232 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:55.232 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:55.232 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:55.232 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:55.232 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:55.232 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:55.232 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.232 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.491 nvme0n1 00:22:55.491 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.491 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:55.491 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.491 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.491 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:55.491 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.491 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:55.491 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:22:55.491 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.491 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.491 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.491 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:55.491 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:22:55.491 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:55.491 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:55.491 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:55.491 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:55.491 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmVmNjllMGQ0MjAzNWU3MjliZWIzNTU0OTA4Mjk0NGMwYWEyNTlhYzJjZDBiYjdjK28vdg==: 00:22:55.491 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjBmOWU2NjhjNzNkMmZjOGU5YTcwNTc1YzAwNGQyNDhDVlXB: 00:22:55.491 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:55.491 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:55.491 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmVmNjllMGQ0MjAzNWU3MjliZWIzNTU0OTA4Mjk0NGMwYWEyNTlhYzJjZDBiYjdjK28vdg==: 00:22:55.491 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjBmOWU2NjhjNzNkMmZjOGU5YTcwNTc1YzAwNGQyNDhDVlXB: ]] 00:22:55.491 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjBmOWU2NjhjNzNkMmZjOGU5YTcwNTc1YzAwNGQyNDhDVlXB: 00:22:55.491 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:22:55.491 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:55.491 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:55.491 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:55.491 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:55.491 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:55.491 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:55.491 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.491 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.491 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.491 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:55.491 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:55.491 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:55.491 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:55.491 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:55.491 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:55.491 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:55.491 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:55.491 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:55.491 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:55.491 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:55.491 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:55.491 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.491 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.750 nvme0n1 00:22:55.750 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.750 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:55.750 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:55.750 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.750 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.750 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.750 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:55.750 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:55.750 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.750 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.750 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.750 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:55.750 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:22:55.750 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:55.750 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:55.750 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:22:55.750 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:55.750 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGI2ZDMyY2E4MzgwOThlZjExOGFiODk1NmY2Y2JhNTBjYTliMzYzMmI3MzU0ZjQwYTEzYTVlNmMxMjIyM2I3Ob9Oa6o=: 00:22:55.750 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:55.751 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:55.751 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:22:55.751 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZGI2ZDMyY2E4MzgwOThlZjExOGFiODk1NmY2Y2JhNTBjYTliMzYzMmI3MzU0ZjQwYTEzYTVlNmMxMjIyM2I3Ob9Oa6o=: 00:22:55.751 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:55.751 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:22:55.751 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:55.751 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:55.751 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:22:55.751 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:55.751 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:55.751 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:55.751 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.751 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.751 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.751 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:55.751 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:55.751 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:55.751 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:55.751 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:55.751 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:55.751 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:55.751 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:55.751 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:55.751 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:55.751 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:55.751 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:55.751 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.751 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.751 nvme0n1 00:22:55.751 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.751 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:55.751 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.751 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.751 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:56.010 17:23:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.010 17:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:56.010 17:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:56.010 17:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.010 17:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:56.010 17:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.010 17:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:56.010 17:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:56.010 17:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:22:56.010 17:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:56.010 17:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:56.010 17:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:56.010 17:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:56.010 17:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2I3M2FiNzJkYmJhY2NkZDQ0YjlkZDU1ODhhODg3NmO+xGRR: 00:22:56.010 17:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTEwNjI4OWMyYWU2M2ViZTU5OTUwMWU0NDg3ZjZhZGM1YTk2MTM0YmMzYWNlM2U4N2JiZGJjNmEyOWRhMTYwMm18XcY=: 00:22:56.010 17:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:56.010 17:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:56.577 17:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2I3M2FiNzJkYmJhY2NkZDQ0YjlkZDU1ODhhODg3NmO+xGRR: 00:22:56.577 17:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTEwNjI4OWMyYWU2M2ViZTU5OTUwMWU0NDg3ZjZhZGM1YTk2MTM0YmMzYWNlM2U4N2JiZGJjNmEyOWRhMTYwMm18XcY=: ]] 00:22:56.577 17:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTEwNjI4OWMyYWU2M2ViZTU5OTUwMWU0NDg3ZjZhZGM1YTk2MTM0YmMzYWNlM2U4N2JiZGJjNmEyOWRhMTYwMm18XcY=: 00:22:56.577 17:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:22:56.577 17:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:56.577 17:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:56.577 17:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:56.577 17:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:56.577 17:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:56.577 17:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:56.577 17:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.577 17:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:56.577 17:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.577 17:23:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:56.577 17:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:56.577 17:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:56.577 17:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:56.577 17:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:56.577 17:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:56.577 17:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:56.577 17:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:56.577 17:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:56.577 17:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:56.577 17:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:56.577 17:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:56.577 17:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.577 17:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:56.577 nvme0n1 00:22:56.577 17:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.577 17:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:56.577 17:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.577 17:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:56.577 17:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:56.836 17:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.836 17:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:56.836 17:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:56.836 17:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.836 17:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:56.836 17:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.836 17:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:56.836 17:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:22:56.836 17:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:56.836 17:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:56.836 17:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:56.836 17:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:56.836 17:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OWI1YjQwMmZjNzhkZjRkMTg1NDIzZTZhNjYxYmNkMjNkNzQ4ODY3MDM1ZTI4N2JiSRmwVQ==: 00:22:56.836 17:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjM5N2Y3OWM3YjAzMzdjOTFhYTYwMmNlOWFiZmY4NGVjOGM5MjA1MGM0YWZlNWJjTAiHNQ==: 00:22:56.836 17:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:56.836 17:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:56.836 17:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWI1YjQwMmZjNzhkZjRkMTg1NDIzZTZhNjYxYmNkMjNkNzQ4ODY3MDM1ZTI4N2JiSRmwVQ==: 00:22:56.836 17:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjM5N2Y3OWM3YjAzMzdjOTFhYTYwMmNlOWFiZmY4NGVjOGM5MjA1MGM0YWZlNWJjTAiHNQ==: ]] 00:22:56.836 17:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjM5N2Y3OWM3YjAzMzdjOTFhYTYwMmNlOWFiZmY4NGVjOGM5MjA1MGM0YWZlNWJjTAiHNQ==: 00:22:56.836 17:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:22:56.836 17:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:56.836 17:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:56.836 17:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:56.836 17:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:56.836 17:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:56.836 17:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:56.836 17:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.836 17:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:56.836 17:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.836 17:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:56.836 17:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:56.836 17:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:56.836 17:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:56.836 17:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:56.836 17:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:56.836 17:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:56.836 17:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:56.836 17:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:56.836 17:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:56.836 17:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:56.836 17:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:56.836 17:23:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.836 17:23:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.095 nvme0n1 00:22:57.095 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.095 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:57.095 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:57.095 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.095 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.095 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.095 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:57.095 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:57.095 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.095 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.095 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.095 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:57.095 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:22:57.095 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:57.095 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:57.095 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:57.095 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:22:57.095 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGI4NGM4MDI5MmM4MGQxZjQ5MDk2NDIyYmY3ZTY4YjenEl3H: 00:22:57.095 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWUzNjg2N2I2NTI2N2M2OGI5NzAyYzUzMzBhYmQxMjaF2jO1: 00:22:57.095 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:57.095 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:57.095 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGI4NGM4MDI5MmM4MGQxZjQ5MDk2NDIyYmY3ZTY4YjenEl3H: 00:22:57.095 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWUzNjg2N2I2NTI2N2M2OGI5NzAyYzUzMzBhYmQxMjaF2jO1: ]] 00:22:57.095 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWUzNjg2N2I2NTI2N2M2OGI5NzAyYzUzMzBhYmQxMjaF2jO1: 00:22:57.095 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:22:57.095 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:57.095 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:57.095 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:57.095 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:22:57.095 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:57.095 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:57.095 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.095 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.095 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.095 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:57.095 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:57.095 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:57.095 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:57.095 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:57.095 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:57.095 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:57.095 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:57.095 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:57.095 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:57.095 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:57.095 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:57.095 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.095 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.353 nvme0n1 00:22:57.353 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.353 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:57.353 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.353 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.353 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:57.353 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.353 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:57.353 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:57.353 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.353 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.353 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.353 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:57.353 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:22:57.353 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:57.353 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:57.353 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:57.353 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:22:57.354 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmVmNjllMGQ0MjAzNWU3MjliZWIzNTU0OTA4Mjk0NGMwYWEyNTlhYzJjZDBiYjdjK28vdg==: 00:22:57.354 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjBmOWU2NjhjNzNkMmZjOGU5YTcwNTc1YzAwNGQyNDhDVlXB: 00:22:57.354 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:57.354 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:57.354 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmVmNjllMGQ0MjAzNWU3MjliZWIzNTU0OTA4Mjk0NGMwYWEyNTlhYzJjZDBiYjdjK28vdg==: 00:22:57.354 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjBmOWU2NjhjNzNkMmZjOGU5YTcwNTc1YzAwNGQyNDhDVlXB: ]] 00:22:57.354 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjBmOWU2NjhjNzNkMmZjOGU5YTcwNTc1YzAwNGQyNDhDVlXB: 00:22:57.354 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:22:57.354 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:57.354 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:57.354 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:57.354 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:22:57.354 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:57.354 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:57.354 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.354 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.354 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.354 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:57.354 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:57.354 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:57.354 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:57.354 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:57.354 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:57.354 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:57.354 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:57.354 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:57.354 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:57.354 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:57.354 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:22:57.354 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.354 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.672 nvme0n1 00:22:57.672 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.672 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:57.672 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.672 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:57.672 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.672 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.672 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:57.672 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:57.672 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.672 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.672 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.672 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:57.672 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:22:57.672 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:57.672 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:57.672 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:22:57.672 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:22:57.672 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGI2ZDMyY2E4MzgwOThlZjExOGFiODk1NmY2Y2JhNTBjYTliMzYzMmI3MzU0ZjQwYTEzYTVlNmMxMjIyM2I3Ob9Oa6o=: 00:22:57.672 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:22:57.672 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:57.672 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:22:57.672 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGI2ZDMyY2E4MzgwOThlZjExOGFiODk1NmY2Y2JhNTBjYTliMzYzMmI3MzU0ZjQwYTEzYTVlNmMxMjIyM2I3Ob9Oa6o=: 00:22:57.672 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:22:57.672 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:22:57.672 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:57.672 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:57.672 17:23:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:22:57.672 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:22:57.672 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:57.672 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:57.672 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.672 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.672 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.672 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:57.672 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:57.672 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:57.672 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:57.672 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:57.672 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:57.672 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:57.672 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:57.672 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:57.672 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:57.672 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:57.672 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:22:57.672 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.672 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.672 nvme0n1 00:22:57.672 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.672 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:57.672 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:57.672 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.672 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.672 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.931 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:57.931 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:57.931 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.931 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.931 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.931 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:57.931 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:57.931 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:22:57.931 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:57.931 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:57.931 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:57.931 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:57.931 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2I3M2FiNzJkYmJhY2NkZDQ0YjlkZDU1ODhhODg3NmO+xGRR: 00:22:57.931 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTEwNjI4OWMyYWU2M2ViZTU5OTUwMWU0NDg3ZjZhZGM1YTk2MTM0YmMzYWNlM2U4N2JiZGJjNmEyOWRhMTYwMm18XcY=: 00:22:57.931 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:57.931 17:23:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:59.307 17:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2I3M2FiNzJkYmJhY2NkZDQ0YjlkZDU1ODhhODg3NmO+xGRR: 00:22:59.307 17:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTEwNjI4OWMyYWU2M2ViZTU5OTUwMWU0NDg3ZjZhZGM1YTk2MTM0YmMzYWNlM2U4N2JiZGJjNmEyOWRhMTYwMm18XcY=: ]] 00:22:59.307 17:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTEwNjI4OWMyYWU2M2ViZTU5OTUwMWU0NDg3ZjZhZGM1YTk2MTM0YmMzYWNlM2U4N2JiZGJjNmEyOWRhMTYwMm18XcY=: 00:22:59.307 17:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:22:59.307 17:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:59.307 17:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:59.307 17:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:59.307 17:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:59.307 17:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:59.307 17:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:59.307 17:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.307 17:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.307 17:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.307 17:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:59.307 17:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:59.307 17:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:59.307 17:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:59.307 17:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:59.307 17:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:59.307 17:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:59.307 17:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:59.307 17:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:59.307 17:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:59.307 17:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:59.307 17:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:59.307 17:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.307 17:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.875 nvme0n1 00:22:59.876 17:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.876 17:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:59.876 17:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:59.876 17:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.876 17:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.876 17:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.876 17:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:59.876 17:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:59.876 17:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.876 17:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.876 17:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.876 17:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:59.876 17:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:22:59.876 17:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:59.876 17:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:59.876 17:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:22:59.876 17:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:59.876 17:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWI1YjQwMmZjNzhkZjRkMTg1NDIzZTZhNjYxYmNkMjNkNzQ4ODY3MDM1ZTI4N2JiSRmwVQ==: 00:22:59.876 17:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjM5N2Y3OWM3YjAzMzdjOTFhYTYwMmNlOWFiZmY4NGVjOGM5MjA1MGM0YWZlNWJjTAiHNQ==: 00:22:59.876 17:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:59.876 17:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:22:59.876 17:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OWI1YjQwMmZjNzhkZjRkMTg1NDIzZTZhNjYxYmNkMjNkNzQ4ODY3MDM1ZTI4N2JiSRmwVQ==: 00:22:59.876 17:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjM5N2Y3OWM3YjAzMzdjOTFhYTYwMmNlOWFiZmY4NGVjOGM5MjA1MGM0YWZlNWJjTAiHNQ==: ]] 00:22:59.876 17:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjM5N2Y3OWM3YjAzMzdjOTFhYTYwMmNlOWFiZmY4NGVjOGM5MjA1MGM0YWZlNWJjTAiHNQ==: 00:22:59.876 17:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:22:59.876 17:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:59.876 17:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:59.876 17:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:22:59.876 17:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:59.876 17:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:59.876 17:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:59.876 17:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.876 17:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.876 17:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.876 17:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:59.876 17:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:59.876 17:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:59.876 17:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:59.876 17:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:59.876 17:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:59.876 17:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:59.876 17:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:59.876 17:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:59.876 17:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:59.876 17:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:59.876 17:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:59.876 17:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.876 17:23:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.135 nvme0n1 00:23:00.135 17:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.135 17:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:00.135 17:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:00.135 17:23:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.135 17:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.135 17:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.135 17:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:00.135 17:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:00.135 17:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.135 17:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.135 17:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.135 17:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:00.135 17:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:23:00.135 17:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:00.135 17:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:00.135 17:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:00.135 17:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:00.135 17:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGI4NGM4MDI5MmM4MGQxZjQ5MDk2NDIyYmY3ZTY4YjenEl3H: 00:23:00.135 17:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWUzNjg2N2I2NTI2N2M2OGI5NzAyYzUzMzBhYmQxMjaF2jO1: 00:23:00.135 17:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:00.135 17:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:00.135 17:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGI4NGM4MDI5MmM4MGQxZjQ5MDk2NDIyYmY3ZTY4YjenEl3H: 00:23:00.135 17:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWUzNjg2N2I2NTI2N2M2OGI5NzAyYzUzMzBhYmQxMjaF2jO1: ]] 00:23:00.135 17:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWUzNjg2N2I2NTI2N2M2OGI5NzAyYzUzMzBhYmQxMjaF2jO1: 00:23:00.135 17:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:23:00.135 17:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:00.135 17:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:00.135 17:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:00.135 17:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:00.135 17:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:00.135 17:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:00.135 17:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.135 17:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.135 17:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.135 17:23:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:00.135 17:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:00.135 17:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:00.135 17:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:00.135 17:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:00.135 17:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:00.135 17:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:00.135 17:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:00.135 17:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:00.135 17:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:00.135 17:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:00.135 17:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:00.135 17:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.135 17:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.700 nvme0n1 00:23:00.700 17:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.700 17:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:00.700 17:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.700 17:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.700 17:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:00.700 17:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.700 17:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:00.700 17:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:00.700 17:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.700 17:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.700 17:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.700 17:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:00.700 17:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:23:00.700 17:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:00.700 17:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:00.700 17:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:00.700 17:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:00.700 17:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NmVmNjllMGQ0MjAzNWU3MjliZWIzNTU0OTA4Mjk0NGMwYWEyNTlhYzJjZDBiYjdjK28vdg==: 00:23:00.700 17:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjBmOWU2NjhjNzNkMmZjOGU5YTcwNTc1YzAwNGQyNDhDVlXB: 00:23:00.700 17:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:00.700 17:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:00.700 17:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmVmNjllMGQ0MjAzNWU3MjliZWIzNTU0OTA4Mjk0NGMwYWEyNTlhYzJjZDBiYjdjK28vdg==: 00:23:00.700 17:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjBmOWU2NjhjNzNkMmZjOGU5YTcwNTc1YzAwNGQyNDhDVlXB: ]] 00:23:00.700 17:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjBmOWU2NjhjNzNkMmZjOGU5YTcwNTc1YzAwNGQyNDhDVlXB: 00:23:00.700 17:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:23:00.700 17:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:00.700 17:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:00.700 17:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:00.700 17:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:00.700 17:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:00.700 17:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:00.700 17:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.700 17:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.700 17:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.700 17:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:00.700 17:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:00.700 17:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:00.700 17:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:00.700 17:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:00.700 17:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:00.700 17:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:00.700 17:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:00.700 17:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:00.700 17:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:00.700 17:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:00.700 17:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:00.700 17:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.700 
17:23:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.957 nvme0n1 00:23:00.957 17:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.957 17:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:00.957 17:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.957 17:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:00.957 17:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.957 17:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.958 17:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:00.958 17:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:00.958 17:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.958 17:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.958 17:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.958 17:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:00.958 17:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:23:00.958 17:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:00.958 17:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:00.958 17:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:00.958 17:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:00.958 17:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGI2ZDMyY2E4MzgwOThlZjExOGFiODk1NmY2Y2JhNTBjYTliMzYzMmI3MzU0ZjQwYTEzYTVlNmMxMjIyM2I3Ob9Oa6o=: 00:23:00.958 17:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:00.958 17:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:00.958 17:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:00.958 17:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGI2ZDMyY2E4MzgwOThlZjExOGFiODk1NmY2Y2JhNTBjYTliMzYzMmI3MzU0ZjQwYTEzYTVlNmMxMjIyM2I3Ob9Oa6o=: 00:23:00.958 17:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:00.958 17:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:23:00.958 17:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:00.958 17:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:00.958 17:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:00.958 17:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:00.958 17:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:00.958 17:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:00.958 17:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.958 17:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.958 17:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.958 17:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:00.958 17:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:00.958 17:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:00.958 17:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:00.958 17:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:00.958 17:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:00.958 17:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:00.958 17:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:00.958 17:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:00.958 17:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:00.958 17:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:00.958 17:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:00.958 17:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.958 17:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:01.524 nvme0n1 00:23:01.524 17:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:01.524 17:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:01.524 17:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:01.524 17:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:01.524 17:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:01.524 17:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:01.524 17:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:01.524 17:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:01.524 17:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:01.524 17:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:01.524 17:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:01.524 17:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:01.524 17:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:01.524 17:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:23:01.524 17:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:01.524 17:23:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:01.524 17:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:01.524 17:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:01.524 17:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2I3M2FiNzJkYmJhY2NkZDQ0YjlkZDU1ODhhODg3NmO+xGRR: 00:23:01.524 17:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTEwNjI4OWMyYWU2M2ViZTU5OTUwMWU0NDg3ZjZhZGM1YTk2MTM0YmMzYWNlM2U4N2JiZGJjNmEyOWRhMTYwMm18XcY=: 00:23:01.524 17:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:01.524 17:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:01.524 17:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2I3M2FiNzJkYmJhY2NkZDQ0YjlkZDU1ODhhODg3NmO+xGRR: 00:23:01.524 17:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTEwNjI4OWMyYWU2M2ViZTU5OTUwMWU0NDg3ZjZhZGM1YTk2MTM0YmMzYWNlM2U4N2JiZGJjNmEyOWRhMTYwMm18XcY=: ]] 00:23:01.524 17:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTEwNjI4OWMyYWU2M2ViZTU5OTUwMWU0NDg3ZjZhZGM1YTk2MTM0YmMzYWNlM2U4N2JiZGJjNmEyOWRhMTYwMm18XcY=: 00:23:01.524 17:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:23:01.524 17:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:01.524 17:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:01.524 17:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:01.524 17:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:01.524 17:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:01.524 17:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:01.524 17:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:01.524 17:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:01.524 17:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:01.524 17:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:01.524 17:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:01.524 17:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:01.524 17:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:01.524 17:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:01.524 17:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:01.524 17:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:01.524 17:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:01.524 17:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:01.524 17:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:01.524 17:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:01.524 17:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:01.524 17:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:01.524 17:23:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.090 nvme0n1 00:23:02.091 17:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.091 17:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:02.091 17:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:02.091 17:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.091 17:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.091 17:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.091 17:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:02.091 17:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:02.091 17:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.091 17:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.091 17:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.091 17:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:02.091 17:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:23:02.091 17:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:02.091 17:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:02.091 17:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:02.091 17:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:02.091 17:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWI1YjQwMmZjNzhkZjRkMTg1NDIzZTZhNjYxYmNkMjNkNzQ4ODY3MDM1ZTI4N2JiSRmwVQ==: 00:23:02.091 17:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjM5N2Y3OWM3YjAzMzdjOTFhYTYwMmNlOWFiZmY4NGVjOGM5MjA1MGM0YWZlNWJjTAiHNQ==: 00:23:02.091 17:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:02.091 17:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:02.091 17:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWI1YjQwMmZjNzhkZjRkMTg1NDIzZTZhNjYxYmNkMjNkNzQ4ODY3MDM1ZTI4N2JiSRmwVQ==: 00:23:02.091 17:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjM5N2Y3OWM3YjAzMzdjOTFhYTYwMmNlOWFiZmY4NGVjOGM5MjA1MGM0YWZlNWJjTAiHNQ==: ]] 00:23:02.091 17:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjM5N2Y3OWM3YjAzMzdjOTFhYTYwMmNlOWFiZmY4NGVjOGM5MjA1MGM0YWZlNWJjTAiHNQ==: 00:23:02.091 17:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:23:02.091 17:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:02.091 17:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:02.091 17:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:02.091 17:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:02.091 17:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:02.091 17:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:02.091 17:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.091 17:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.091 17:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.091 17:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:02.091 17:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:02.091 17:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:02.091 17:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:02.091 17:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:02.091 17:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:02.091 17:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:02.091 17:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:02.091 17:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:02.091 17:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:02.091 17:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:02.091 17:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:02.091 17:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.091 17:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.657 nvme0n1 00:23:02.657 17:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.657 17:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:02.657 17:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:02.657 17:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.657 17:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.657 17:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.657 17:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:02.657 17:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:02.657 17:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:23:02.657 17:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.657 17:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.657 17:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:02.657 17:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:23:02.657 17:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:02.657 17:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:02.657 17:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:02.657 17:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:02.657 17:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGI4NGM4MDI5MmM4MGQxZjQ5MDk2NDIyYmY3ZTY4YjenEl3H: 00:23:02.657 17:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWUzNjg2N2I2NTI2N2M2OGI5NzAyYzUzMzBhYmQxMjaF2jO1: 00:23:02.657 17:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:02.657 17:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:02.657 17:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGI4NGM4MDI5MmM4MGQxZjQ5MDk2NDIyYmY3ZTY4YjenEl3H: 00:23:02.658 17:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWUzNjg2N2I2NTI2N2M2OGI5NzAyYzUzMzBhYmQxMjaF2jO1: ]] 00:23:02.658 17:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWUzNjg2N2I2NTI2N2M2OGI5NzAyYzUzMzBhYmQxMjaF2jO1: 00:23:02.658 17:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:23:02.658 17:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:02.658 17:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:02.658 17:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:02.658 17:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:02.658 17:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:02.658 17:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:02.658 17:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.658 17:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.658 17:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.658 17:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:02.658 17:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:02.658 17:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:02.658 17:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:02.658 17:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:02.658 17:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:02.658 
17:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:02.658 17:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:02.658 17:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:02.658 17:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:02.658 17:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:02.658 17:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:02.658 17:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.658 17:23:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.225 nvme0n1 00:23:03.225 17:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.225 17:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:03.225 17:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:03.225 17:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.225 17:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.226 17:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.226 17:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:03.226 17:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:03.226 17:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.226 17:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.226 17:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.226 17:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:03.226 17:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:23:03.226 17:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:03.226 17:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:03.226 17:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:03.226 17:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:03.226 17:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmVmNjllMGQ0MjAzNWU3MjliZWIzNTU0OTA4Mjk0NGMwYWEyNTlhYzJjZDBiYjdjK28vdg==: 00:23:03.226 17:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjBmOWU2NjhjNzNkMmZjOGU5YTcwNTc1YzAwNGQyNDhDVlXB: 00:23:03.226 17:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:03.226 17:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:03.226 17:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmVmNjllMGQ0MjAzNWU3MjliZWIzNTU0OTA4Mjk0NGMwYWEyNTlhYzJjZDBiYjdjK28vdg==: 00:23:03.226 17:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:MjBmOWU2NjhjNzNkMmZjOGU5YTcwNTc1YzAwNGQyNDhDVlXB: ]] 00:23:03.226 17:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjBmOWU2NjhjNzNkMmZjOGU5YTcwNTc1YzAwNGQyNDhDVlXB: 00:23:03.226 17:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:23:03.226 17:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:03.226 17:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:03.226 17:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:03.226 17:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:03.226 17:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:03.226 17:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:03.226 17:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.226 17:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.226 17:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.226 17:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:03.226 17:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:03.226 17:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:03.226 17:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:03.226 17:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:03.226 17:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:03.226 17:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:03.226 17:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:03.226 17:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:03.226 17:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:03.226 17:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:03.226 17:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:03.226 17:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.226 17:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.793 nvme0n1 00:23:03.793 17:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.793 17:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:03.793 17:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.793 17:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.793 17:23:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:03.793 17:23:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.793 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:03.793 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:03.793 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.793 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.051 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.051 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:04.051 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:23:04.051 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:04.051 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:04.051 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:04.051 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:04.051 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGI2ZDMyY2E4MzgwOThlZjExOGFiODk1NmY2Y2JhNTBjYTliMzYzMmI3MzU0ZjQwYTEzYTVlNmMxMjIyM2I3Ob9Oa6o=: 00:23:04.051 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:04.051 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:04.051 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:04.051 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGI2ZDMyY2E4MzgwOThlZjExOGFiODk1NmY2Y2JhNTBjYTliMzYzMmI3MzU0ZjQwYTEzYTVlNmMxMjIyM2I3Ob9Oa6o=: 00:23:04.051 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:04.051 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:23:04.051 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:04.051 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:04.051 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:04.051 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:04.051 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:04.051 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:04.051 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.051 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.051 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.051 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:04.051 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:04.051 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:04.051 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:04.051 17:23:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:04.051 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:04.051 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:04.051 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:04.051 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:04.051 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:04.051 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:04.051 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:04.051 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.051 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.618 nvme0n1 00:23:04.618 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.618 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:04.618 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:04.618 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.618 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.618 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.618 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:04.618 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:04.618 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.618 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.618 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.618 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:04.618 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:04.618 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:04.618 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:23:04.618 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:04.618 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:04.618 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:04.618 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:04.618 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2I3M2FiNzJkYmJhY2NkZDQ0YjlkZDU1ODhhODg3NmO+xGRR: 00:23:04.618 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MTEwNjI4OWMyYWU2M2ViZTU5OTUwMWU0NDg3ZjZhZGM1YTk2MTM0YmMzYWNlM2U4N2JiZGJjNmEyOWRhMTYwMm18XcY=: 00:23:04.618 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:04.618 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:04.618 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2I3M2FiNzJkYmJhY2NkZDQ0YjlkZDU1ODhhODg3NmO+xGRR: 00:23:04.618 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTEwNjI4OWMyYWU2M2ViZTU5OTUwMWU0NDg3ZjZhZGM1YTk2MTM0YmMzYWNlM2U4N2JiZGJjNmEyOWRhMTYwMm18XcY=: ]] 00:23:04.618 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTEwNjI4OWMyYWU2M2ViZTU5OTUwMWU0NDg3ZjZhZGM1YTk2MTM0YmMzYWNlM2U4N2JiZGJjNmEyOWRhMTYwMm18XcY=: 00:23:04.618 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:23:04.618 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:04.618 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:04.618 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:04.618 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:04.618 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:04.618 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:04.618 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.618 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.618 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.618 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:04.618 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:04.618 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:04.618 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:04.618 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:04.619 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:04.619 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:04.619 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:04.619 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:04.619 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:04.619 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:04.619 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:04.619 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.619 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:04.619 nvme0n1 00:23:04.619 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.619 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:04.619 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:04.619 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.619 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.619 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.619 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:04.619 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:04.619 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.619 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.619 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.619 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:04.619 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:23:04.619 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:04.619 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:04.619 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:04.619 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:04.619 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWI1YjQwMmZjNzhkZjRkMTg1NDIzZTZhNjYxYmNkMjNkNzQ4ODY3MDM1ZTI4N2JiSRmwVQ==: 00:23:04.619 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjM5N2Y3OWM3YjAzMzdjOTFhYTYwMmNlOWFiZmY4NGVjOGM5MjA1MGM0YWZlNWJjTAiHNQ==: 00:23:04.619 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:04.619 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:04.619 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWI1YjQwMmZjNzhkZjRkMTg1NDIzZTZhNjYxYmNkMjNkNzQ4ODY3MDM1ZTI4N2JiSRmwVQ==: 00:23:04.619 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjM5N2Y3OWM3YjAzMzdjOTFhYTYwMmNlOWFiZmY4NGVjOGM5MjA1MGM0YWZlNWJjTAiHNQ==: ]] 00:23:04.619 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjM5N2Y3OWM3YjAzMzdjOTFhYTYwMmNlOWFiZmY4NGVjOGM5MjA1MGM0YWZlNWJjTAiHNQ==: 00:23:04.619 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:23:04.619 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:04.619 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:04.619 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:04.619 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:04.619 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:23:04.619 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:04.619 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.619 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.878 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.878 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:04.878 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:04.878 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:04.878 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:04.878 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:04.878 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:04.878 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:04.878 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:04.878 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:04.878 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:04.878 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:04.878 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:04.878 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.878 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.878 nvme0n1 00:23:04.878 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.878 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:04.878 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:04.878 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.878 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.878 17:23:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.878 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:04.878 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:04.878 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.878 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.878 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.878 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:04.878 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:23:04.878 
17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:04.878 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:04.878 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:04.878 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:04.878 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGI4NGM4MDI5MmM4MGQxZjQ5MDk2NDIyYmY3ZTY4YjenEl3H: 00:23:04.878 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWUzNjg2N2I2NTI2N2M2OGI5NzAyYzUzMzBhYmQxMjaF2jO1: 00:23:04.878 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:04.878 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:04.878 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGI4NGM4MDI5MmM4MGQxZjQ5MDk2NDIyYmY3ZTY4YjenEl3H: 00:23:04.878 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWUzNjg2N2I2NTI2N2M2OGI5NzAyYzUzMzBhYmQxMjaF2jO1: ]] 00:23:04.878 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWUzNjg2N2I2NTI2N2M2OGI5NzAyYzUzMzBhYmQxMjaF2jO1: 00:23:04.878 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:23:04.878 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:04.878 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:04.878 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:04.878 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:04.878 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:04.878 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:04.878 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.878 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.878 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.878 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:04.878 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:04.878 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:04.878 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:04.878 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:04.878 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:04.878 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:04.878 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:04.878 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:04.878 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:04.878 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:04.878 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:04.878 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.878 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.137 nvme0n1 00:23:05.137 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.137 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:05.137 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.137 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:05.137 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.137 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.137 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:05.137 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:05.137 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.137 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.137 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.137 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:05.137 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:23:05.137 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:05.137 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:05.137 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:05.137 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:05.137 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmVmNjllMGQ0MjAzNWU3MjliZWIzNTU0OTA4Mjk0NGMwYWEyNTlhYzJjZDBiYjdjK28vdg==: 00:23:05.137 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjBmOWU2NjhjNzNkMmZjOGU5YTcwNTc1YzAwNGQyNDhDVlXB: 00:23:05.137 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:05.137 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:05.137 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmVmNjllMGQ0MjAzNWU3MjliZWIzNTU0OTA4Mjk0NGMwYWEyNTlhYzJjZDBiYjdjK28vdg==: 00:23:05.137 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjBmOWU2NjhjNzNkMmZjOGU5YTcwNTc1YzAwNGQyNDhDVlXB: ]] 00:23:05.137 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjBmOWU2NjhjNzNkMmZjOGU5YTcwNTc1YzAwNGQyNDhDVlXB: 00:23:05.137 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:23:05.137 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:05.137 
17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:05.137 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:05.137 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:05.137 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:05.137 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:05.137 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.137 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.137 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.137 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:05.137 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:05.137 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:05.137 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:05.137 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:05.137 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:05.137 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:05.137 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:05.137 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:05.137 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:05.137 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:05.137 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:05.137 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.137 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.137 nvme0n1 00:23:05.137 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.137 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:05.137 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:05.137 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.137 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.137 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.137 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:05.137 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:05.137 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.137 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:05.137 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.137 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:05.137 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:23:05.137 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:05.137 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:05.137 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:05.137 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:05.137 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGI2ZDMyY2E4MzgwOThlZjExOGFiODk1NmY2Y2JhNTBjYTliMzYzMmI3MzU0ZjQwYTEzYTVlNmMxMjIyM2I3Ob9Oa6o=: 00:23:05.137 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:05.137 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:05.137 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:05.137 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGI2ZDMyY2E4MzgwOThlZjExOGFiODk1NmY2Y2JhNTBjYTliMzYzMmI3MzU0ZjQwYTEzYTVlNmMxMjIyM2I3Ob9Oa6o=: 00:23:05.137 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:05.396 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:23:05.396 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:05.396 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:05.396 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:05.396 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:05.396 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:05.396 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:05.396 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.396 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.396 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.396 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:05.396 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:05.396 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:05.396 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:05.396 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:05.396 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:05.396 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:05.396 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:05.396 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:05.396 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:05.396 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:05.396 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:05.396 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.396 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.396 nvme0n1 00:23:05.396 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.396 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:05.396 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:05.396 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.396 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.396 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.396 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:05.396 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:05.396 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.396 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.396 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.396 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:05.396 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:05.396 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:23:05.396 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:05.396 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:05.396 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:05.396 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:05.396 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2I3M2FiNzJkYmJhY2NkZDQ0YjlkZDU1ODhhODg3NmO+xGRR: 00:23:05.396 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTEwNjI4OWMyYWU2M2ViZTU5OTUwMWU0NDg3ZjZhZGM1YTk2MTM0YmMzYWNlM2U4N2JiZGJjNmEyOWRhMTYwMm18XcY=: 00:23:05.396 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:05.396 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:05.396 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2I3M2FiNzJkYmJhY2NkZDQ0YjlkZDU1ODhhODg3NmO+xGRR: 00:23:05.396 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTEwNjI4OWMyYWU2M2ViZTU5OTUwMWU0NDg3ZjZhZGM1YTk2MTM0YmMzYWNlM2U4N2JiZGJjNmEyOWRhMTYwMm18XcY=: ]] 00:23:05.396 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MTEwNjI4OWMyYWU2M2ViZTU5OTUwMWU0NDg3ZjZhZGM1YTk2MTM0YmMzYWNlM2U4N2JiZGJjNmEyOWRhMTYwMm18XcY=: 00:23:05.396 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:23:05.396 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:05.396 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:05.396 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:05.396 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:05.396 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:05.396 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:05.396 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.396 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.396 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.396 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:05.396 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:05.396 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:05.396 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:05.396 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:05.396 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:05.396 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:05.396 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:05.396 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:05.396 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:05.396 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:05.396 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:05.396 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.396 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.655 nvme0n1 00:23:05.655 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.655 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:05.655 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.655 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:05.655 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.655 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.655 
17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:05.655 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:05.655 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.655 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.655 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.655 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:05.655 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:23:05.655 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:05.655 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:05.655 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:05.655 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:05.655 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWI1YjQwMmZjNzhkZjRkMTg1NDIzZTZhNjYxYmNkMjNkNzQ4ODY3MDM1ZTI4N2JiSRmwVQ==: 00:23:05.655 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjM5N2Y3OWM3YjAzMzdjOTFhYTYwMmNlOWFiZmY4NGVjOGM5MjA1MGM0YWZlNWJjTAiHNQ==: 00:23:05.655 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:05.655 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:05.655 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWI1YjQwMmZjNzhkZjRkMTg1NDIzZTZhNjYxYmNkMjNkNzQ4ODY3MDM1ZTI4N2JiSRmwVQ==: 00:23:05.655 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjM5N2Y3OWM3YjAzMzdjOTFhYTYwMmNlOWFiZmY4NGVjOGM5MjA1MGM0YWZlNWJjTAiHNQ==: ]] 00:23:05.655 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjM5N2Y3OWM3YjAzMzdjOTFhYTYwMmNlOWFiZmY4NGVjOGM5MjA1MGM0YWZlNWJjTAiHNQ==: 00:23:05.655 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:23:05.655 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:05.655 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:05.655 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:05.655 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:05.655 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:05.655 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:05.655 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.655 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.655 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.655 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:05.655 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:05.655 17:23:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:05.655 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:05.655 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:05.655 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:05.655 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:05.655 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:05.656 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:05.656 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:05.656 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:05.656 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:05.656 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.656 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.656 nvme0n1 00:23:05.656 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.656 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:05.656 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.656 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:05.656 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.914 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.914 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:05.914 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:05.914 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.914 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.914 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.914 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:05.915 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:23:05.915 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:05.915 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:05.915 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:05.915 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:05.915 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGI4NGM4MDI5MmM4MGQxZjQ5MDk2NDIyYmY3ZTY4YjenEl3H: 00:23:05.915 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWUzNjg2N2I2NTI2N2M2OGI5NzAyYzUzMzBhYmQxMjaF2jO1: 00:23:05.915 17:23:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:05.915 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:05.915 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGI4NGM4MDI5MmM4MGQxZjQ5MDk2NDIyYmY3ZTY4YjenEl3H: 00:23:05.915 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWUzNjg2N2I2NTI2N2M2OGI5NzAyYzUzMzBhYmQxMjaF2jO1: ]] 00:23:05.915 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWUzNjg2N2I2NTI2N2M2OGI5NzAyYzUzMzBhYmQxMjaF2jO1: 00:23:05.915 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:23:05.915 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:05.915 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:05.915 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:05.915 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:05.915 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:05.915 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:05.915 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.915 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.915 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.915 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:05.915 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:05.915 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:05.915 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:05.915 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:05.915 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:05.915 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:05.915 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:05.915 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:05.915 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:05.915 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:05.915 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:05.915 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.915 17:23:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.915 nvme0n1 00:23:05.915 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.915 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:05.915 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.915 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:05.915 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.915 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.915 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:05.915 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:05.915 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.915 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.915 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.915 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:05.915 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:23:05.915 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:05.915 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:05.915 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:05.915 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:05.915 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmVmNjllMGQ0MjAzNWU3MjliZWIzNTU0OTA4Mjk0NGMwYWEyNTlhYzJjZDBiYjdjK28vdg==: 00:23:05.915 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjBmOWU2NjhjNzNkMmZjOGU5YTcwNTc1YzAwNGQyNDhDVlXB: 00:23:05.915 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:05.915 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:05.915 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmVmNjllMGQ0MjAzNWU3MjliZWIzNTU0OTA4Mjk0NGMwYWEyNTlhYzJjZDBiYjdjK28vdg==: 00:23:05.915 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjBmOWU2NjhjNzNkMmZjOGU5YTcwNTc1YzAwNGQyNDhDVlXB: ]] 00:23:05.915 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjBmOWU2NjhjNzNkMmZjOGU5YTcwNTc1YzAwNGQyNDhDVlXB: 00:23:05.915 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:23:06.174 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:06.174 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:06.174 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:06.174 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:06.174 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:06.174 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:06.174 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.174 17:23:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.174 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.174 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:06.174 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:06.174 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:06.174 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:06.174 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:06.174 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:06.174 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:06.174 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:06.174 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:06.174 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:06.174 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:06.174 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:06.174 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.174 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.174 nvme0n1 00:23:06.174 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.174 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:06.174 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.174 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:06.174 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.174 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.174 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:06.174 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:06.174 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.174 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.174 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.174 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:06.174 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:23:06.174 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:06.174 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:06.174 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:06.174 
17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:06.174 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGI2ZDMyY2E4MzgwOThlZjExOGFiODk1NmY2Y2JhNTBjYTliMzYzMmI3MzU0ZjQwYTEzYTVlNmMxMjIyM2I3Ob9Oa6o=: 00:23:06.174 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:06.174 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:06.174 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:06.174 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGI2ZDMyY2E4MzgwOThlZjExOGFiODk1NmY2Y2JhNTBjYTliMzYzMmI3MzU0ZjQwYTEzYTVlNmMxMjIyM2I3Ob9Oa6o=: 00:23:06.174 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:06.174 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:23:06.174 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:06.174 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:06.174 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:06.174 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:06.174 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:06.174 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:06.174 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.174 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.174 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.174 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:06.174 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:06.174 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:06.174 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:06.174 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:06.174 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:06.174 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:06.174 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:06.174 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:06.174 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:06.174 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:06.174 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:06.174 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.174 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
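The trace above repeats one pattern per DH group and key index: auth.sh installs the DH-HMAC-CHAP secret on the kernel nvmet target (nvmet_auth_set_key), restricts the SPDK host to the digest and DH group under test via bdev_nvme_set_options, attaches a controller to 10.0.0.1:4420 with the matching --dhchap-key (plus --dhchap-ctrlr-key where a controller key is defined, for bidirectional authentication), checks that bdev_nvme_get_controllers reports nvme0, and detaches it again. The following is a minimal sketch of that loop, not the harness itself: it assumes nvmet_auth_set_key and rpc_cmd are the helpers from host/auth.sh and the test framework, and that the DHHC-1 secrets are already available to them as keys[0..4] and ckeys[] (values elided here; key index 4 has no controller key, matching the [[ -z '' ]] branch in the trace). Only sha384 is shown because that is the digest exercised in this stretch of the log.

  # Sketch of the per-dhgroup / per-keyid loop seen in the trace (assumptions noted above).
  for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096; do
    for keyid in 0 1 2 3 4; do
      # Target side: install the host secret (and controller secret, if any) for this key index.
      nvmet_auth_set_key sha384 "$dhgroup" "$keyid"
      # Host side: only allow the digest and DH group under test.
      rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"
      # Connect with in-band authentication; the controller key is optional (bidirectional auth),
      # using the same expansion auth.sh itself uses to drop it when ckeys[keyid] is empty.
      rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
          --dhchap-key "key$keyid" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"}
      # Verify the controller authenticated and came up, then detach before the next iteration.
      [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
      rpc_cmd bdev_nvme_detach_controller nvme0
    done
  done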
00:23:06.433 nvme0n1 00:23:06.433 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.433 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:06.433 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.433 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:06.433 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.433 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.433 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:06.433 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:06.433 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.433 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.433 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.433 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:06.433 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:06.433 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:23:06.433 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:06.433 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:06.433 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:06.433 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:06.433 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2I3M2FiNzJkYmJhY2NkZDQ0YjlkZDU1ODhhODg3NmO+xGRR: 00:23:06.433 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTEwNjI4OWMyYWU2M2ViZTU5OTUwMWU0NDg3ZjZhZGM1YTk2MTM0YmMzYWNlM2U4N2JiZGJjNmEyOWRhMTYwMm18XcY=: 00:23:06.433 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:06.433 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:06.433 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2I3M2FiNzJkYmJhY2NkZDQ0YjlkZDU1ODhhODg3NmO+xGRR: 00:23:06.433 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTEwNjI4OWMyYWU2M2ViZTU5OTUwMWU0NDg3ZjZhZGM1YTk2MTM0YmMzYWNlM2U4N2JiZGJjNmEyOWRhMTYwMm18XcY=: ]] 00:23:06.433 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTEwNjI4OWMyYWU2M2ViZTU5OTUwMWU0NDg3ZjZhZGM1YTk2MTM0YmMzYWNlM2U4N2JiZGJjNmEyOWRhMTYwMm18XcY=: 00:23:06.433 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:23:06.433 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:06.433 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:06.433 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:06.433 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:06.433 17:23:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:06.434 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:06.434 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.434 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.434 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.434 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:06.434 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:06.434 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:06.434 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:06.434 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:06.434 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:06.434 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:06.434 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:06.434 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:06.434 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:06.434 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:06.434 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:06.434 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.434 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.692 nvme0n1 00:23:06.692 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.692 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:06.692 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.692 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.692 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:06.692 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.692 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:06.692 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:06.692 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.692 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.692 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.692 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:06.692 17:23:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:23:06.692 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:06.692 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:06.692 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:06.692 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:06.692 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWI1YjQwMmZjNzhkZjRkMTg1NDIzZTZhNjYxYmNkMjNkNzQ4ODY3MDM1ZTI4N2JiSRmwVQ==: 00:23:06.692 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjM5N2Y3OWM3YjAzMzdjOTFhYTYwMmNlOWFiZmY4NGVjOGM5MjA1MGM0YWZlNWJjTAiHNQ==: 00:23:06.692 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:06.692 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:06.692 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWI1YjQwMmZjNzhkZjRkMTg1NDIzZTZhNjYxYmNkMjNkNzQ4ODY3MDM1ZTI4N2JiSRmwVQ==: 00:23:06.692 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjM5N2Y3OWM3YjAzMzdjOTFhYTYwMmNlOWFiZmY4NGVjOGM5MjA1MGM0YWZlNWJjTAiHNQ==: ]] 00:23:06.692 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjM5N2Y3OWM3YjAzMzdjOTFhYTYwMmNlOWFiZmY4NGVjOGM5MjA1MGM0YWZlNWJjTAiHNQ==: 00:23:06.692 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:23:06.692 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:06.692 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:06.692 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:06.692 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:06.692 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:06.692 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:06.692 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.692 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.692 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.692 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:06.692 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:06.692 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:06.692 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:06.692 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:06.692 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:06.692 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:06.692 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:06.692 17:23:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:06.692 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:06.692 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:06.692 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:06.692 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.692 17:23:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.950 nvme0n1 00:23:06.950 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.950 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:06.950 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.950 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:06.950 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.950 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.950 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:06.950 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:06.950 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.950 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.950 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.950 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:06.950 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:23:06.951 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:06.951 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:06.951 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:06.951 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:06.951 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGI4NGM4MDI5MmM4MGQxZjQ5MDk2NDIyYmY3ZTY4YjenEl3H: 00:23:06.951 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWUzNjg2N2I2NTI2N2M2OGI5NzAyYzUzMzBhYmQxMjaF2jO1: 00:23:06.951 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:06.951 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:06.951 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGI4NGM4MDI5MmM4MGQxZjQ5MDk2NDIyYmY3ZTY4YjenEl3H: 00:23:06.951 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWUzNjg2N2I2NTI2N2M2OGI5NzAyYzUzMzBhYmQxMjaF2jO1: ]] 00:23:06.951 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWUzNjg2N2I2NTI2N2M2OGI5NzAyYzUzMzBhYmQxMjaF2jO1: 00:23:06.951 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:23:06.951 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:06.951 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:06.951 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:06.951 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:06.951 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:06.951 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:06.951 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.951 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.951 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.951 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:06.951 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:06.951 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:06.951 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:06.951 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:06.951 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:06.951 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:06.951 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:06.951 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:06.951 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:06.951 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:06.951 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:06.951 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.951 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.210 nvme0n1 00:23:07.210 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.210 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:07.210 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.210 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.210 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:07.210 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.210 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:07.210 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:23:07.210 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.210 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.210 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.210 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:07.210 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:23:07.210 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:07.210 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:07.210 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:07.210 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:07.210 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmVmNjllMGQ0MjAzNWU3MjliZWIzNTU0OTA4Mjk0NGMwYWEyNTlhYzJjZDBiYjdjK28vdg==: 00:23:07.210 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjBmOWU2NjhjNzNkMmZjOGU5YTcwNTc1YzAwNGQyNDhDVlXB: 00:23:07.210 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:07.210 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:07.210 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmVmNjllMGQ0MjAzNWU3MjliZWIzNTU0OTA4Mjk0NGMwYWEyNTlhYzJjZDBiYjdjK28vdg==: 00:23:07.210 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjBmOWU2NjhjNzNkMmZjOGU5YTcwNTc1YzAwNGQyNDhDVlXB: ]] 00:23:07.210 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjBmOWU2NjhjNzNkMmZjOGU5YTcwNTc1YzAwNGQyNDhDVlXB: 00:23:07.210 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:23:07.210 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:07.210 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:07.210 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:07.210 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:07.210 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:07.210 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:07.210 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.210 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.210 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.210 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:07.210 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:07.210 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:07.210 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:07.210 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:07.210 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:07.210 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:07.210 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:07.210 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:07.210 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:07.210 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:07.210 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:07.210 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.210 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.470 nvme0n1 00:23:07.470 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.470 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:07.470 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:07.470 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.470 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.470 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.470 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:07.470 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:07.470 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.470 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.470 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.470 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:07.470 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:23:07.470 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:07.470 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:07.470 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:07.470 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:07.470 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGI2ZDMyY2E4MzgwOThlZjExOGFiODk1NmY2Y2JhNTBjYTliMzYzMmI3MzU0ZjQwYTEzYTVlNmMxMjIyM2I3Ob9Oa6o=: 00:23:07.470 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:07.470 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:07.470 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:07.470 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZGI2ZDMyY2E4MzgwOThlZjExOGFiODk1NmY2Y2JhNTBjYTliMzYzMmI3MzU0ZjQwYTEzYTVlNmMxMjIyM2I3Ob9Oa6o=: 00:23:07.470 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:07.470 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:23:07.470 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:07.470 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:07.470 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:07.470 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:07.470 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:07.470 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:07.470 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.470 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.470 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.470 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:07.470 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:07.470 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:07.470 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:07.470 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:07.470 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:07.470 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:07.470 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:07.470 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:07.470 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:07.470 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:07.470 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:07.470 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.470 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.728 nvme0n1 00:23:07.728 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.728 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:07.728 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:07.728 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.728 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.728 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.728 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:07.728 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:07.728 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.728 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.728 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.728 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:07.728 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:07.728 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:23:07.728 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:07.728 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:07.728 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:07.728 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:07.728 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2I3M2FiNzJkYmJhY2NkZDQ0YjlkZDU1ODhhODg3NmO+xGRR: 00:23:07.728 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTEwNjI4OWMyYWU2M2ViZTU5OTUwMWU0NDg3ZjZhZGM1YTk2MTM0YmMzYWNlM2U4N2JiZGJjNmEyOWRhMTYwMm18XcY=: 00:23:07.728 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:07.728 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:07.728 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2I3M2FiNzJkYmJhY2NkZDQ0YjlkZDU1ODhhODg3NmO+xGRR: 00:23:07.728 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTEwNjI4OWMyYWU2M2ViZTU5OTUwMWU0NDg3ZjZhZGM1YTk2MTM0YmMzYWNlM2U4N2JiZGJjNmEyOWRhMTYwMm18XcY=: ]] 00:23:07.728 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTEwNjI4OWMyYWU2M2ViZTU5OTUwMWU0NDg3ZjZhZGM1YTk2MTM0YmMzYWNlM2U4N2JiZGJjNmEyOWRhMTYwMm18XcY=: 00:23:07.728 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:23:07.728 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:07.728 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:07.728 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:07.728 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:07.728 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:07.728 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:07.728 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.728 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.728 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.728 17:23:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:07.728 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:07.728 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:07.728 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:07.728 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:07.728 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:07.728 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:07.728 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:07.728 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:07.728 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:07.728 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:07.728 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:07.728 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.728 17:23:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.986 nvme0n1 00:23:07.986 17:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.986 17:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:07.986 17:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.986 17:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:07.986 17:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.986 17:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.244 17:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:08.244 17:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:08.244 17:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.244 17:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.244 17:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.244 17:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:08.244 17:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:23:08.244 17:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:08.244 17:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:08.244 17:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:08.244 17:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:08.244 17:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OWI1YjQwMmZjNzhkZjRkMTg1NDIzZTZhNjYxYmNkMjNkNzQ4ODY3MDM1ZTI4N2JiSRmwVQ==: 00:23:08.244 17:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjM5N2Y3OWM3YjAzMzdjOTFhYTYwMmNlOWFiZmY4NGVjOGM5MjA1MGM0YWZlNWJjTAiHNQ==: 00:23:08.244 17:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:08.244 17:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:08.244 17:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWI1YjQwMmZjNzhkZjRkMTg1NDIzZTZhNjYxYmNkMjNkNzQ4ODY3MDM1ZTI4N2JiSRmwVQ==: 00:23:08.244 17:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjM5N2Y3OWM3YjAzMzdjOTFhYTYwMmNlOWFiZmY4NGVjOGM5MjA1MGM0YWZlNWJjTAiHNQ==: ]] 00:23:08.244 17:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjM5N2Y3OWM3YjAzMzdjOTFhYTYwMmNlOWFiZmY4NGVjOGM5MjA1MGM0YWZlNWJjTAiHNQ==: 00:23:08.244 17:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:23:08.244 17:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:08.244 17:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:08.244 17:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:08.244 17:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:08.244 17:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:08.244 17:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:08.244 17:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.244 17:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.244 17:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.244 17:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:08.244 17:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:08.244 17:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:08.244 17:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:08.244 17:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:08.244 17:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:08.244 17:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:08.244 17:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:08.244 17:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:08.244 17:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:08.244 17:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:08.244 17:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:08.244 17:23:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.244 17:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.507 nvme0n1 00:23:08.507 17:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.507 17:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:08.507 17:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.507 17:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:08.507 17:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.507 17:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.507 17:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:08.507 17:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:08.507 17:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.507 17:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.507 17:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.507 17:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:08.507 17:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:23:08.507 17:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:08.507 17:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:08.507 17:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:08.507 17:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:08.507 17:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGI4NGM4MDI5MmM4MGQxZjQ5MDk2NDIyYmY3ZTY4YjenEl3H: 00:23:08.507 17:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWUzNjg2N2I2NTI2N2M2OGI5NzAyYzUzMzBhYmQxMjaF2jO1: 00:23:08.507 17:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:08.507 17:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:08.507 17:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGI4NGM4MDI5MmM4MGQxZjQ5MDk2NDIyYmY3ZTY4YjenEl3H: 00:23:08.507 17:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWUzNjg2N2I2NTI2N2M2OGI5NzAyYzUzMzBhYmQxMjaF2jO1: ]] 00:23:08.507 17:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWUzNjg2N2I2NTI2N2M2OGI5NzAyYzUzMzBhYmQxMjaF2jO1: 00:23:08.507 17:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:23:08.507 17:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:08.507 17:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:08.507 17:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:08.507 17:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:08.507 17:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:08.507 17:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:08.507 17:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.507 17:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.507 17:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.507 17:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:08.507 17:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:08.507 17:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:08.507 17:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:08.507 17:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:08.507 17:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:08.507 17:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:08.507 17:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:08.507 17:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:08.507 17:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:08.507 17:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:08.507 17:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:08.507 17:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.507 17:23:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.776 nvme0n1 00:23:08.776 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.776 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:08.776 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.035 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:09.035 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.035 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.035 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:09.035 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:09.035 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.035 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.035 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.035 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:09.035 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:23:09.035 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:09.035 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:09.035 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:09.035 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:09.035 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmVmNjllMGQ0MjAzNWU3MjliZWIzNTU0OTA4Mjk0NGMwYWEyNTlhYzJjZDBiYjdjK28vdg==: 00:23:09.035 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjBmOWU2NjhjNzNkMmZjOGU5YTcwNTc1YzAwNGQyNDhDVlXB: 00:23:09.035 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:09.035 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:09.035 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmVmNjllMGQ0MjAzNWU3MjliZWIzNTU0OTA4Mjk0NGMwYWEyNTlhYzJjZDBiYjdjK28vdg==: 00:23:09.035 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjBmOWU2NjhjNzNkMmZjOGU5YTcwNTc1YzAwNGQyNDhDVlXB: ]] 00:23:09.035 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjBmOWU2NjhjNzNkMmZjOGU5YTcwNTc1YzAwNGQyNDhDVlXB: 00:23:09.035 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:23:09.035 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:09.035 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:09.035 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:09.035 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:09.035 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:09.035 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:09.035 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.035 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.035 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.035 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:09.035 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:09.035 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:09.035 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:09.035 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:09.035 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:09.035 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:09.035 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:09.035 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:09.035 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:09.035 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:09.035 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:09.035 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.035 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.293 nvme0n1 00:23:09.293 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.293 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:09.293 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.293 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.293 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:09.293 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.293 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:09.293 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:09.293 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.293 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.293 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.293 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:09.293 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:23:09.293 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:09.293 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:09.293 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:09.293 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:09.293 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGI2ZDMyY2E4MzgwOThlZjExOGFiODk1NmY2Y2JhNTBjYTliMzYzMmI3MzU0ZjQwYTEzYTVlNmMxMjIyM2I3Ob9Oa6o=: 00:23:09.293 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:09.293 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:09.293 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:09.293 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGI2ZDMyY2E4MzgwOThlZjExOGFiODk1NmY2Y2JhNTBjYTliMzYzMmI3MzU0ZjQwYTEzYTVlNmMxMjIyM2I3Ob9Oa6o=: 00:23:09.293 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:09.293 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:23:09.293 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:09.293 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:09.293 17:23:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:09.293 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:09.293 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:09.293 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:09.293 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.293 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.293 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.293 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:09.293 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:09.293 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:09.293 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:09.293 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:09.293 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:09.293 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:09.293 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:09.293 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:09.293 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:09.293 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:09.293 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:09.293 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.293 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.859 nvme0n1 00:23:09.859 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.859 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:09.859 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.859 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:09.859 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.859 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.859 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:09.859 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:09.859 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.859 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.859 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.859 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:09.859 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:09.859 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:23:09.859 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:09.859 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:09.859 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:09.859 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:09.859 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2I3M2FiNzJkYmJhY2NkZDQ0YjlkZDU1ODhhODg3NmO+xGRR: 00:23:09.859 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTEwNjI4OWMyYWU2M2ViZTU5OTUwMWU0NDg3ZjZhZGM1YTk2MTM0YmMzYWNlM2U4N2JiZGJjNmEyOWRhMTYwMm18XcY=: 00:23:09.859 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:09.859 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:09.859 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2I3M2FiNzJkYmJhY2NkZDQ0YjlkZDU1ODhhODg3NmO+xGRR: 00:23:09.859 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTEwNjI4OWMyYWU2M2ViZTU5OTUwMWU0NDg3ZjZhZGM1YTk2MTM0YmMzYWNlM2U4N2JiZGJjNmEyOWRhMTYwMm18XcY=: ]] 00:23:09.859 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTEwNjI4OWMyYWU2M2ViZTU5OTUwMWU0NDg3ZjZhZGM1YTk2MTM0YmMzYWNlM2U4N2JiZGJjNmEyOWRhMTYwMm18XcY=: 00:23:09.859 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:23:09.859 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:09.859 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:09.859 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:09.859 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:09.859 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:09.859 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:09.859 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.859 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.859 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.859 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:09.859 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:09.859 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:09.859 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:09.859 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:09.859 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:09.859 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:09.859 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:09.859 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:09.859 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:09.859 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:09.859 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:09.859 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.859 17:23:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.425 nvme0n1 00:23:10.425 17:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.425 17:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:10.425 17:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.425 17:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.425 17:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:10.425 17:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.425 17:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:10.425 17:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:10.425 17:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.425 17:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.425 17:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.425 17:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:10.425 17:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:23:10.425 17:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:10.425 17:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:10.425 17:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:10.425 17:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:10.425 17:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWI1YjQwMmZjNzhkZjRkMTg1NDIzZTZhNjYxYmNkMjNkNzQ4ODY3MDM1ZTI4N2JiSRmwVQ==: 00:23:10.425 17:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjM5N2Y3OWM3YjAzMzdjOTFhYTYwMmNlOWFiZmY4NGVjOGM5MjA1MGM0YWZlNWJjTAiHNQ==: 00:23:10.425 17:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:10.426 17:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:10.426 17:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OWI1YjQwMmZjNzhkZjRkMTg1NDIzZTZhNjYxYmNkMjNkNzQ4ODY3MDM1ZTI4N2JiSRmwVQ==: 00:23:10.426 17:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjM5N2Y3OWM3YjAzMzdjOTFhYTYwMmNlOWFiZmY4NGVjOGM5MjA1MGM0YWZlNWJjTAiHNQ==: ]] 00:23:10.426 17:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjM5N2Y3OWM3YjAzMzdjOTFhYTYwMmNlOWFiZmY4NGVjOGM5MjA1MGM0YWZlNWJjTAiHNQ==: 00:23:10.426 17:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:23:10.426 17:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:10.426 17:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:10.426 17:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:10.426 17:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:10.426 17:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:10.426 17:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:10.426 17:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.426 17:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.426 17:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.426 17:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:10.426 17:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:10.426 17:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:10.426 17:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:10.426 17:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:10.426 17:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:10.426 17:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:10.426 17:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:10.426 17:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:10.426 17:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:10.426 17:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:10.426 17:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:10.426 17:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.426 17:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.993 nvme0n1 00:23:10.993 17:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.993 17:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:10.993 17:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.993 17:23:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:10.993 17:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.993 17:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.993 17:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:10.993 17:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:10.993 17:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.993 17:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.993 17:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.993 17:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:10.993 17:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:23:10.993 17:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:10.993 17:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:10.993 17:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:10.993 17:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:10.993 17:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGI4NGM4MDI5MmM4MGQxZjQ5MDk2NDIyYmY3ZTY4YjenEl3H: 00:23:10.993 17:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWUzNjg2N2I2NTI2N2M2OGI5NzAyYzUzMzBhYmQxMjaF2jO1: 00:23:10.993 17:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:10.993 17:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:10.993 17:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGI4NGM4MDI5MmM4MGQxZjQ5MDk2NDIyYmY3ZTY4YjenEl3H: 00:23:10.993 17:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWUzNjg2N2I2NTI2N2M2OGI5NzAyYzUzMzBhYmQxMjaF2jO1: ]] 00:23:10.993 17:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWUzNjg2N2I2NTI2N2M2OGI5NzAyYzUzMzBhYmQxMjaF2jO1: 00:23:10.993 17:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:23:10.993 17:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:10.993 17:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:10.993 17:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:10.993 17:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:10.993 17:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:10.993 17:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:10.993 17:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.993 17:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.251 17:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.251 17:23:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:11.251 17:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:11.251 17:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:11.251 17:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:11.251 17:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:11.251 17:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:11.251 17:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:11.251 17:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:11.252 17:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:11.252 17:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:11.252 17:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:11.252 17:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:11.252 17:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.252 17:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.818 nvme0n1 00:23:11.818 17:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.818 17:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:11.818 17:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:11.818 17:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.818 17:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.818 17:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.818 17:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:11.818 17:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:11.818 17:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.818 17:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.818 17:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.818 17:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:11.818 17:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:23:11.818 17:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:11.818 17:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:11.818 17:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:11.818 17:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:11.818 17:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NmVmNjllMGQ0MjAzNWU3MjliZWIzNTU0OTA4Mjk0NGMwYWEyNTlhYzJjZDBiYjdjK28vdg==: 00:23:11.818 17:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjBmOWU2NjhjNzNkMmZjOGU5YTcwNTc1YzAwNGQyNDhDVlXB: 00:23:11.818 17:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:11.818 17:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:11.819 17:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmVmNjllMGQ0MjAzNWU3MjliZWIzNTU0OTA4Mjk0NGMwYWEyNTlhYzJjZDBiYjdjK28vdg==: 00:23:11.819 17:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjBmOWU2NjhjNzNkMmZjOGU5YTcwNTc1YzAwNGQyNDhDVlXB: ]] 00:23:11.819 17:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjBmOWU2NjhjNzNkMmZjOGU5YTcwNTc1YzAwNGQyNDhDVlXB: 00:23:11.819 17:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:23:11.819 17:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:11.819 17:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:11.819 17:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:11.819 17:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:11.819 17:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:11.819 17:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:11.819 17:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.819 17:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.819 17:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.819 17:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:11.819 17:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:11.819 17:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:11.819 17:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:11.819 17:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:11.819 17:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:11.819 17:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:11.819 17:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:11.819 17:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:11.819 17:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:11.819 17:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:11.819 17:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:11.819 17:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.819 
17:23:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.384 nvme0n1 00:23:12.384 17:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.384 17:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:12.384 17:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:12.384 17:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.384 17:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.384 17:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.384 17:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:12.384 17:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:12.384 17:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.384 17:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.384 17:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.384 17:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:12.384 17:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:23:12.384 17:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:12.384 17:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:12.384 17:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:12.384 17:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:12.384 17:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGI2ZDMyY2E4MzgwOThlZjExOGFiODk1NmY2Y2JhNTBjYTliMzYzMmI3MzU0ZjQwYTEzYTVlNmMxMjIyM2I3Ob9Oa6o=: 00:23:12.384 17:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:12.384 17:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:12.384 17:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:12.384 17:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGI2ZDMyY2E4MzgwOThlZjExOGFiODk1NmY2Y2JhNTBjYTliMzYzMmI3MzU0ZjQwYTEzYTVlNmMxMjIyM2I3Ob9Oa6o=: 00:23:12.384 17:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:12.384 17:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:23:12.384 17:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:12.384 17:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:12.384 17:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:12.384 17:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:12.384 17:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:12.384 17:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:12.384 17:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.384 17:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.384 17:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.384 17:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:12.384 17:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:12.384 17:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:12.384 17:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:12.384 17:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:12.384 17:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:12.384 17:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:12.384 17:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:12.384 17:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:12.384 17:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:12.384 17:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:12.384 17:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:12.384 17:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.384 17:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.950 nvme0n1 00:23:12.950 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.950 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:12.950 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:12.950 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.950 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.950 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.950 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:12.950 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:12.950 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.950 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.209 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.209 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:13.209 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:13.209 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:13.209 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:23:13.209 17:23:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:13.209 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:13.209 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:13.209 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:13.209 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2I3M2FiNzJkYmJhY2NkZDQ0YjlkZDU1ODhhODg3NmO+xGRR: 00:23:13.209 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTEwNjI4OWMyYWU2M2ViZTU5OTUwMWU0NDg3ZjZhZGM1YTk2MTM0YmMzYWNlM2U4N2JiZGJjNmEyOWRhMTYwMm18XcY=: 00:23:13.209 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:13.209 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:13.209 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2I3M2FiNzJkYmJhY2NkZDQ0YjlkZDU1ODhhODg3NmO+xGRR: 00:23:13.209 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTEwNjI4OWMyYWU2M2ViZTU5OTUwMWU0NDg3ZjZhZGM1YTk2MTM0YmMzYWNlM2U4N2JiZGJjNmEyOWRhMTYwMm18XcY=: ]] 00:23:13.209 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTEwNjI4OWMyYWU2M2ViZTU5OTUwMWU0NDg3ZjZhZGM1YTk2MTM0YmMzYWNlM2U4N2JiZGJjNmEyOWRhMTYwMm18XcY=: 00:23:13.209 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:23:13.209 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:13.209 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:13.209 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:13.209 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:13.209 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:13.209 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:13.209 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.209 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.209 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.209 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:13.209 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:13.209 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:13.209 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:13.209 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:13.209 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:13.209 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:13.209 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:13.209 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:13.209 17:23:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:13.209 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:13.209 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:13.209 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.209 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.209 nvme0n1 00:23:13.209 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.209 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:13.209 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:13.209 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.210 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.210 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.210 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:13.210 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:13.210 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.210 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.210 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.210 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:13.210 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:23:13.210 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:13.210 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:13.210 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:13.210 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:13.210 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWI1YjQwMmZjNzhkZjRkMTg1NDIzZTZhNjYxYmNkMjNkNzQ4ODY3MDM1ZTI4N2JiSRmwVQ==: 00:23:13.210 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjM5N2Y3OWM3YjAzMzdjOTFhYTYwMmNlOWFiZmY4NGVjOGM5MjA1MGM0YWZlNWJjTAiHNQ==: 00:23:13.210 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:13.210 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:13.210 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWI1YjQwMmZjNzhkZjRkMTg1NDIzZTZhNjYxYmNkMjNkNzQ4ODY3MDM1ZTI4N2JiSRmwVQ==: 00:23:13.210 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjM5N2Y3OWM3YjAzMzdjOTFhYTYwMmNlOWFiZmY4NGVjOGM5MjA1MGM0YWZlNWJjTAiHNQ==: ]] 00:23:13.210 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjM5N2Y3OWM3YjAzMzdjOTFhYTYwMmNlOWFiZmY4NGVjOGM5MjA1MGM0YWZlNWJjTAiHNQ==: 00:23:13.210 17:23:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:23:13.210 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:13.210 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:13.210 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:13.210 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:13.210 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:13.210 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:13.210 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.210 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.210 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.210 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:13.210 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:13.210 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:13.210 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:13.210 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:13.210 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:13.210 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:13.210 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:13.210 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:13.210 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:13.210 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:13.210 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:13.210 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.210 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.469 nvme0n1 00:23:13.469 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.469 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:13.469 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:13.469 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.469 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.469 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.469 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:13.469 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:13.469 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.469 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.469 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.469 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:13.469 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:23:13.469 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:13.469 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:13.469 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:13.469 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:13.469 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGI4NGM4MDI5MmM4MGQxZjQ5MDk2NDIyYmY3ZTY4YjenEl3H: 00:23:13.469 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWUzNjg2N2I2NTI2N2M2OGI5NzAyYzUzMzBhYmQxMjaF2jO1: 00:23:13.469 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:13.469 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:13.469 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGI4NGM4MDI5MmM4MGQxZjQ5MDk2NDIyYmY3ZTY4YjenEl3H: 00:23:13.469 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWUzNjg2N2I2NTI2N2M2OGI5NzAyYzUzMzBhYmQxMjaF2jO1: ]] 00:23:13.469 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWUzNjg2N2I2NTI2N2M2OGI5NzAyYzUzMzBhYmQxMjaF2jO1: 00:23:13.469 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:23:13.469 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:13.469 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:13.469 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:13.469 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:13.469 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:13.469 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:13.469 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.469 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.469 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.469 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:13.469 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:13.469 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:13.469 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:13.469 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:13.469 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:13.469 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:13.469 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:13.469 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:13.469 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:13.469 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:13.469 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:13.469 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.469 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.469 nvme0n1 00:23:13.469 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.469 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:13.469 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:13.469 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.469 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.469 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.728 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:13.728 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:13.728 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.728 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.728 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.728 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:13.728 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:23:13.728 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:13.728 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:13.728 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:13.728 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:13.728 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmVmNjllMGQ0MjAzNWU3MjliZWIzNTU0OTA4Mjk0NGMwYWEyNTlhYzJjZDBiYjdjK28vdg==: 00:23:13.728 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjBmOWU2NjhjNzNkMmZjOGU5YTcwNTc1YzAwNGQyNDhDVlXB: 00:23:13.728 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:13.728 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:13.728 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:NmVmNjllMGQ0MjAzNWU3MjliZWIzNTU0OTA4Mjk0NGMwYWEyNTlhYzJjZDBiYjdjK28vdg==: 00:23:13.728 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjBmOWU2NjhjNzNkMmZjOGU5YTcwNTc1YzAwNGQyNDhDVlXB: ]] 00:23:13.728 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjBmOWU2NjhjNzNkMmZjOGU5YTcwNTc1YzAwNGQyNDhDVlXB: 00:23:13.728 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:23:13.728 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:13.728 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:13.728 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:13.728 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:13.728 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:13.728 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:13.728 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.728 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.728 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.728 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:13.728 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:13.728 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:13.728 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:13.728 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:13.728 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:13.728 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:13.728 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:13.728 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:13.728 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:13.728 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:13.728 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:13.728 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.728 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.728 nvme0n1 00:23:13.728 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.728 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:13.728 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.728 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:13.728 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:13.728 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.728 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:13.728 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:13.728 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.728 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.728 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.728 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:13.728 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:23:13.728 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:13.728 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:13.728 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:13.728 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:13.728 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGI2ZDMyY2E4MzgwOThlZjExOGFiODk1NmY2Y2JhNTBjYTliMzYzMmI3MzU0ZjQwYTEzYTVlNmMxMjIyM2I3Ob9Oa6o=: 00:23:13.728 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:13.728 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:13.728 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:13.728 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGI2ZDMyY2E4MzgwOThlZjExOGFiODk1NmY2Y2JhNTBjYTliMzYzMmI3MzU0ZjQwYTEzYTVlNmMxMjIyM2I3Ob9Oa6o=: 00:23:13.728 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:13.728 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:23:13.728 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:13.728 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:13.728 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:13.728 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:13.728 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:13.728 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:13.728 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.728 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.729 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.729 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:13.729 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:13.729 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:23:13.729 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:13.729 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:13.729 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:13.729 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:13.729 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:13.729 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:13.729 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:13.729 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:13.729 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:13.729 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.729 17:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.987 nvme0n1 00:23:13.987 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.987 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:13.987 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.987 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:13.987 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.987 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.987 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:13.987 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:13.988 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.988 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.988 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.988 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:13.988 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:13.988 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:23:13.988 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:13.988 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:13.988 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:13.988 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:13.988 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2I3M2FiNzJkYmJhY2NkZDQ0YjlkZDU1ODhhODg3NmO+xGRR: 00:23:13.988 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MTEwNjI4OWMyYWU2M2ViZTU5OTUwMWU0NDg3ZjZhZGM1YTk2MTM0YmMzYWNlM2U4N2JiZGJjNmEyOWRhMTYwMm18XcY=: 00:23:13.988 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:13.988 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:13.988 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2I3M2FiNzJkYmJhY2NkZDQ0YjlkZDU1ODhhODg3NmO+xGRR: 00:23:13.988 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTEwNjI4OWMyYWU2M2ViZTU5OTUwMWU0NDg3ZjZhZGM1YTk2MTM0YmMzYWNlM2U4N2JiZGJjNmEyOWRhMTYwMm18XcY=: ]] 00:23:13.988 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTEwNjI4OWMyYWU2M2ViZTU5OTUwMWU0NDg3ZjZhZGM1YTk2MTM0YmMzYWNlM2U4N2JiZGJjNmEyOWRhMTYwMm18XcY=: 00:23:13.988 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:23:13.988 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:13.988 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:13.988 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:13.988 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:13.988 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:13.988 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:13.988 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.988 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.988 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.988 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:13.988 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:13.988 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:13.988 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:13.988 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:13.988 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:13.988 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:13.988 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:13.988 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:13.988 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:13.988 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:13.988 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:13.988 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.988 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:13.988 nvme0n1 00:23:13.988 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.988 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:13.988 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.988 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:13.988 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.247 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.247 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:14.247 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:14.247 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.247 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.247 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.247 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:14.247 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:23:14.247 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:14.247 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:14.247 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:14.247 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:14.247 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWI1YjQwMmZjNzhkZjRkMTg1NDIzZTZhNjYxYmNkMjNkNzQ4ODY3MDM1ZTI4N2JiSRmwVQ==: 00:23:14.247 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjM5N2Y3OWM3YjAzMzdjOTFhYTYwMmNlOWFiZmY4NGVjOGM5MjA1MGM0YWZlNWJjTAiHNQ==: 00:23:14.247 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:14.247 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:14.247 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWI1YjQwMmZjNzhkZjRkMTg1NDIzZTZhNjYxYmNkMjNkNzQ4ODY3MDM1ZTI4N2JiSRmwVQ==: 00:23:14.247 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjM5N2Y3OWM3YjAzMzdjOTFhYTYwMmNlOWFiZmY4NGVjOGM5MjA1MGM0YWZlNWJjTAiHNQ==: ]] 00:23:14.247 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjM5N2Y3OWM3YjAzMzdjOTFhYTYwMmNlOWFiZmY4NGVjOGM5MjA1MGM0YWZlNWJjTAiHNQ==: 00:23:14.247 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:23:14.247 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:14.247 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:14.247 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:14.247 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:14.247 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:23:14.247 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:14.247 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.247 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.247 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.247 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:14.247 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:14.247 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:14.247 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:14.247 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:14.247 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:14.247 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:14.247 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:14.247 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:14.247 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:14.247 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:14.247 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:14.247 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.247 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.247 nvme0n1 00:23:14.247 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.247 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:14.247 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.247 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.247 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:14.247 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.247 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:14.247 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:14.247 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.247 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.506 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.506 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:14.506 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:23:14.506 
17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:14.506 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:14.506 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:14.506 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:14.506 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGI4NGM4MDI5MmM4MGQxZjQ5MDk2NDIyYmY3ZTY4YjenEl3H: 00:23:14.506 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWUzNjg2N2I2NTI2N2M2OGI5NzAyYzUzMzBhYmQxMjaF2jO1: 00:23:14.506 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:14.506 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:14.506 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGI4NGM4MDI5MmM4MGQxZjQ5MDk2NDIyYmY3ZTY4YjenEl3H: 00:23:14.506 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWUzNjg2N2I2NTI2N2M2OGI5NzAyYzUzMzBhYmQxMjaF2jO1: ]] 00:23:14.506 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWUzNjg2N2I2NTI2N2M2OGI5NzAyYzUzMzBhYmQxMjaF2jO1: 00:23:14.506 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:23:14.506 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:14.506 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:14.506 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:14.506 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:14.506 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:14.506 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:14.506 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.507 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.507 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.507 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:14.507 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:14.507 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:14.507 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:14.507 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:14.507 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:14.507 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:14.507 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:14.507 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:14.507 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:14.507 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:14.507 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:14.507 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.507 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.507 nvme0n1 00:23:14.507 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.507 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:14.507 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:14.507 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.507 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.507 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.507 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:14.507 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:14.507 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.507 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.507 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.507 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:14.507 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:23:14.507 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:14.507 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:14.507 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:14.507 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:14.507 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmVmNjllMGQ0MjAzNWU3MjliZWIzNTU0OTA4Mjk0NGMwYWEyNTlhYzJjZDBiYjdjK28vdg==: 00:23:14.507 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjBmOWU2NjhjNzNkMmZjOGU5YTcwNTc1YzAwNGQyNDhDVlXB: 00:23:14.507 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:14.507 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:14.507 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmVmNjllMGQ0MjAzNWU3MjliZWIzNTU0OTA4Mjk0NGMwYWEyNTlhYzJjZDBiYjdjK28vdg==: 00:23:14.507 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjBmOWU2NjhjNzNkMmZjOGU5YTcwNTc1YzAwNGQyNDhDVlXB: ]] 00:23:14.507 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjBmOWU2NjhjNzNkMmZjOGU5YTcwNTc1YzAwNGQyNDhDVlXB: 00:23:14.507 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:23:14.507 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:14.507 
17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:14.507 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:14.507 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:14.507 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:14.507 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:14.507 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.507 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.507 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.507 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:14.507 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:14.507 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:14.507 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:14.507 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:14.507 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:14.507 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:14.507 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:14.507 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:14.507 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:14.507 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:14.507 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:14.507 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.507 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.766 nvme0n1 00:23:14.766 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.766 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:14.766 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:14.766 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.766 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.766 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.766 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:14.766 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:14.766 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.766 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:14.766 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.766 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:14.766 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:23:14.766 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:14.766 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:14.766 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:14.766 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:14.766 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGI2ZDMyY2E4MzgwOThlZjExOGFiODk1NmY2Y2JhNTBjYTliMzYzMmI3MzU0ZjQwYTEzYTVlNmMxMjIyM2I3Ob9Oa6o=: 00:23:14.766 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:14.766 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:14.766 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:14.766 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGI2ZDMyY2E4MzgwOThlZjExOGFiODk1NmY2Y2JhNTBjYTliMzYzMmI3MzU0ZjQwYTEzYTVlNmMxMjIyM2I3Ob9Oa6o=: 00:23:14.766 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:14.766 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:23:14.766 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:14.766 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:14.766 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:14.766 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:14.766 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:14.766 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:14.766 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.766 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.766 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.766 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:14.766 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:14.766 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:14.766 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:14.766 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:14.766 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:14.766 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:14.766 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:14.766 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:14.766 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:14.766 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:14.766 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:14.766 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.766 17:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.025 nvme0n1 00:23:15.025 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.025 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:15.025 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.025 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.025 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:15.025 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.025 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:15.025 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:15.025 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.025 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.025 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.025 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:15.025 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:15.025 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:23:15.025 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:15.025 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:15.025 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:15.025 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:15.025 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2I3M2FiNzJkYmJhY2NkZDQ0YjlkZDU1ODhhODg3NmO+xGRR: 00:23:15.025 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTEwNjI4OWMyYWU2M2ViZTU5OTUwMWU0NDg3ZjZhZGM1YTk2MTM0YmMzYWNlM2U4N2JiZGJjNmEyOWRhMTYwMm18XcY=: 00:23:15.025 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:15.025 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:15.025 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2I3M2FiNzJkYmJhY2NkZDQ0YjlkZDU1ODhhODg3NmO+xGRR: 00:23:15.025 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTEwNjI4OWMyYWU2M2ViZTU5OTUwMWU0NDg3ZjZhZGM1YTk2MTM0YmMzYWNlM2U4N2JiZGJjNmEyOWRhMTYwMm18XcY=: ]] 00:23:15.025 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MTEwNjI4OWMyYWU2M2ViZTU5OTUwMWU0NDg3ZjZhZGM1YTk2MTM0YmMzYWNlM2U4N2JiZGJjNmEyOWRhMTYwMm18XcY=: 00:23:15.025 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:23:15.025 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:15.025 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:15.025 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:15.025 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:15.025 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:15.025 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:15.025 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.025 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.025 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.025 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:15.025 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:15.025 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:15.025 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:15.025 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:15.025 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:15.025 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:15.025 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:15.025 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:15.025 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:15.025 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:15.025 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:15.025 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.025 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.284 nvme0n1 00:23:15.284 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.284 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:15.284 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.284 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:15.284 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.284 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.284 
17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:15.284 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:15.284 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.284 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.284 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.284 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:15.284 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:23:15.284 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:15.284 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:15.284 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:15.284 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:15.284 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWI1YjQwMmZjNzhkZjRkMTg1NDIzZTZhNjYxYmNkMjNkNzQ4ODY3MDM1ZTI4N2JiSRmwVQ==: 00:23:15.284 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjM5N2Y3OWM3YjAzMzdjOTFhYTYwMmNlOWFiZmY4NGVjOGM5MjA1MGM0YWZlNWJjTAiHNQ==: 00:23:15.284 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:15.284 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:15.284 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWI1YjQwMmZjNzhkZjRkMTg1NDIzZTZhNjYxYmNkMjNkNzQ4ODY3MDM1ZTI4N2JiSRmwVQ==: 00:23:15.284 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjM5N2Y3OWM3YjAzMzdjOTFhYTYwMmNlOWFiZmY4NGVjOGM5MjA1MGM0YWZlNWJjTAiHNQ==: ]] 00:23:15.284 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjM5N2Y3OWM3YjAzMzdjOTFhYTYwMmNlOWFiZmY4NGVjOGM5MjA1MGM0YWZlNWJjTAiHNQ==: 00:23:15.284 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:23:15.284 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:15.284 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:15.284 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:15.284 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:15.284 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:15.284 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:15.284 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.284 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.284 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.284 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:15.284 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:15.284 17:24:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:15.284 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:15.284 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:15.284 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:15.284 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:15.284 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:15.284 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:15.284 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:15.284 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:15.284 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:15.284 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.284 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.543 nvme0n1 00:23:15.543 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.543 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:15.543 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:15.543 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.543 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.543 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.543 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:15.543 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:15.543 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.543 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.543 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.543 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:15.543 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:23:15.543 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:15.543 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:15.543 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:15.543 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:15.543 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGI4NGM4MDI5MmM4MGQxZjQ5MDk2NDIyYmY3ZTY4YjenEl3H: 00:23:15.543 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWUzNjg2N2I2NTI2N2M2OGI5NzAyYzUzMzBhYmQxMjaF2jO1: 00:23:15.543 17:24:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:15.543 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:15.543 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGI4NGM4MDI5MmM4MGQxZjQ5MDk2NDIyYmY3ZTY4YjenEl3H: 00:23:15.543 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWUzNjg2N2I2NTI2N2M2OGI5NzAyYzUzMzBhYmQxMjaF2jO1: ]] 00:23:15.543 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWUzNjg2N2I2NTI2N2M2OGI5NzAyYzUzMzBhYmQxMjaF2jO1: 00:23:15.543 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:23:15.543 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:15.543 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:15.543 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:15.543 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:15.543 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:15.543 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:15.543 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.543 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.543 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.543 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:15.543 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:15.543 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:15.543 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:15.543 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:15.543 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:15.543 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:15.543 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:15.543 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:15.543 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:15.544 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:15.544 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:15.544 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.544 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.803 nvme0n1 00:23:15.803 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.803 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:15.803 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:15.803 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.803 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.803 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.803 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:15.803 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:15.803 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.803 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.803 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.803 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:15.803 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:23:15.803 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:15.803 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:15.803 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:15.803 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:15.803 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmVmNjllMGQ0MjAzNWU3MjliZWIzNTU0OTA4Mjk0NGMwYWEyNTlhYzJjZDBiYjdjK28vdg==: 00:23:15.803 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjBmOWU2NjhjNzNkMmZjOGU5YTcwNTc1YzAwNGQyNDhDVlXB: 00:23:15.803 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:15.803 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:15.803 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmVmNjllMGQ0MjAzNWU3MjliZWIzNTU0OTA4Mjk0NGMwYWEyNTlhYzJjZDBiYjdjK28vdg==: 00:23:15.803 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjBmOWU2NjhjNzNkMmZjOGU5YTcwNTc1YzAwNGQyNDhDVlXB: ]] 00:23:15.803 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjBmOWU2NjhjNzNkMmZjOGU5YTcwNTc1YzAwNGQyNDhDVlXB: 00:23:15.803 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:23:15.803 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:15.803 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:15.803 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:15.803 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:15.803 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:15.803 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:15.803 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.803 17:24:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.803 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.803 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:15.803 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:15.803 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:15.803 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:15.803 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:15.803 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:15.803 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:15.803 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:15.803 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:15.803 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:15.803 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:15.803 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:15.803 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.803 17:24:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.062 nvme0n1 00:23:16.062 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.062 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:16.062 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:16.062 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.062 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.062 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.062 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:16.062 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:16.062 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.062 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.062 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.062 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:16.062 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:23:16.062 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:16.062 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:16.062 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:16.062 
17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:16.062 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGI2ZDMyY2E4MzgwOThlZjExOGFiODk1NmY2Y2JhNTBjYTliMzYzMmI3MzU0ZjQwYTEzYTVlNmMxMjIyM2I3Ob9Oa6o=: 00:23:16.062 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:16.062 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:16.062 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:16.062 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGI2ZDMyY2E4MzgwOThlZjExOGFiODk1NmY2Y2JhNTBjYTliMzYzMmI3MzU0ZjQwYTEzYTVlNmMxMjIyM2I3Ob9Oa6o=: 00:23:16.062 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:16.062 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:23:16.062 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:16.062 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:16.062 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:16.062 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:16.062 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:16.062 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:16.062 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.062 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.062 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.062 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:16.062 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:16.062 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:16.062 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:16.062 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:16.062 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:16.062 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:16.062 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:16.062 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:16.062 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:16.062 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:16.062 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:16.062 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.062 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
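[Editor's note, for readers following the xtrace above: the nvmf_auth_host test is iterating over every DH group and key index with the sha512 digest. For each combination it programs the secret on the NVMe-oF target (nvmet_auth_set_key), restricts the host to the same digest/dhgroup, attaches a controller over TCP with DH-HMAC-CHAP, checks that the controller appears, and detaches it. The lines below are a condensed reconstruction of that loop from the traced commands in host/auth.sh, not the verbatim script; the helpers (rpc_cmd, nvmet_auth_set_key, get_main_ns_ip) and the keys/ckeys/dhgroups arrays come from the SPDK test harness and are assumed to have been set up earlier, and the literal addresses and NQNs are simply the ones visible in the trace.

  # Condensed sketch of the traced loop (assumes SPDK test-harness helpers).
  digest=sha512
  for dhgroup in "${dhgroups[@]}"; do            # ffdhe3072, ffdhe4096, ffdhe6144, ffdhe8192, ...
    for keyid in "${!keys[@]}"; do               # key indexes 0..4
      # Target side: program the digest, DH group and DHHC-1 secret for this host.
      nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"

      # Host side: restrict the initiator to the same digest and DH group ...
      rpc_cmd bdev_nvme_set_options \
        --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

      # ... and attach with the matching key; a controller key is added only
      # when a ckey exists for this index (bidirectional authentication).
      ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
      rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a "$(get_main_ns_ip)" -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"

      # The attach only succeeds if DH-HMAC-CHAP completed, so the controller
      # showing up in the list is the pass condition; then clean up.
      [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
      rpc_cmd bdev_nvme_detach_controller nvme0
    done
  done

The DHHC-1:<t>: prefix on each secret in the trace identifies the hash the key was generated for (00 = unhashed secret usable with any digest, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512), which is why the key lengths differ between indexes. End of editor's note; the raw trace continues below.]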
00:23:16.321 nvme0n1 00:23:16.321 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.321 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:16.321 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:16.321 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.321 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.321 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.321 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:16.321 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:16.321 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.321 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.321 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.321 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:16.321 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:16.321 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:23:16.321 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:16.321 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:16.321 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:16.321 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:16.321 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2I3M2FiNzJkYmJhY2NkZDQ0YjlkZDU1ODhhODg3NmO+xGRR: 00:23:16.321 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTEwNjI4OWMyYWU2M2ViZTU5OTUwMWU0NDg3ZjZhZGM1YTk2MTM0YmMzYWNlM2U4N2JiZGJjNmEyOWRhMTYwMm18XcY=: 00:23:16.321 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:16.321 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:16.321 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2I3M2FiNzJkYmJhY2NkZDQ0YjlkZDU1ODhhODg3NmO+xGRR: 00:23:16.321 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTEwNjI4OWMyYWU2M2ViZTU5OTUwMWU0NDg3ZjZhZGM1YTk2MTM0YmMzYWNlM2U4N2JiZGJjNmEyOWRhMTYwMm18XcY=: ]] 00:23:16.321 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTEwNjI4OWMyYWU2M2ViZTU5OTUwMWU0NDg3ZjZhZGM1YTk2MTM0YmMzYWNlM2U4N2JiZGJjNmEyOWRhMTYwMm18XcY=: 00:23:16.321 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:23:16.321 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:16.321 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:16.321 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:16.321 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:16.321 17:24:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:16.321 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:16.321 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.321 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.321 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.321 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:16.321 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:16.321 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:16.321 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:16.321 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:16.321 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:16.321 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:16.321 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:16.321 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:16.321 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:16.321 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:16.321 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:16.321 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.321 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.580 nvme0n1 00:23:16.580 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.580 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:16.580 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.580 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.580 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:16.580 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.580 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:16.580 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:16.580 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.580 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.840 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.840 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:16.840 17:24:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:23:16.840 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:16.840 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:16.840 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:16.840 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:16.840 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWI1YjQwMmZjNzhkZjRkMTg1NDIzZTZhNjYxYmNkMjNkNzQ4ODY3MDM1ZTI4N2JiSRmwVQ==: 00:23:16.840 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjM5N2Y3OWM3YjAzMzdjOTFhYTYwMmNlOWFiZmY4NGVjOGM5MjA1MGM0YWZlNWJjTAiHNQ==: 00:23:16.840 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:16.840 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:16.840 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWI1YjQwMmZjNzhkZjRkMTg1NDIzZTZhNjYxYmNkMjNkNzQ4ODY3MDM1ZTI4N2JiSRmwVQ==: 00:23:16.840 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjM5N2Y3OWM3YjAzMzdjOTFhYTYwMmNlOWFiZmY4NGVjOGM5MjA1MGM0YWZlNWJjTAiHNQ==: ]] 00:23:16.840 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjM5N2Y3OWM3YjAzMzdjOTFhYTYwMmNlOWFiZmY4NGVjOGM5MjA1MGM0YWZlNWJjTAiHNQ==: 00:23:16.840 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:23:16.840 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:16.840 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:16.840 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:16.840 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:16.840 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:16.840 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:16.840 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.840 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.840 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.840 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:16.840 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:16.840 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:16.840 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:16.840 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:16.840 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:16.840 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:16.840 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:16.840 17:24:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:16.840 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:16.840 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:16.840 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:16.840 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.840 17:24:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.099 nvme0n1 00:23:17.099 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.099 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:17.099 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.099 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:17.099 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.099 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.099 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:17.100 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:17.100 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.100 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.100 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.100 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:17.100 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:23:17.100 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:17.100 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:17.100 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:17.100 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:17.100 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGI4NGM4MDI5MmM4MGQxZjQ5MDk2NDIyYmY3ZTY4YjenEl3H: 00:23:17.100 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWUzNjg2N2I2NTI2N2M2OGI5NzAyYzUzMzBhYmQxMjaF2jO1: 00:23:17.100 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:17.100 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:17.100 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGI4NGM4MDI5MmM4MGQxZjQ5MDk2NDIyYmY3ZTY4YjenEl3H: 00:23:17.100 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWUzNjg2N2I2NTI2N2M2OGI5NzAyYzUzMzBhYmQxMjaF2jO1: ]] 00:23:17.100 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWUzNjg2N2I2NTI2N2M2OGI5NzAyYzUzMzBhYmQxMjaF2jO1: 00:23:17.100 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:23:17.100 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:17.100 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:17.100 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:17.100 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:17.100 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:17.100 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:17.100 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.100 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.100 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.100 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:17.100 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:17.100 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:17.100 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:17.100 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:17.100 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:17.100 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:17.100 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:17.100 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:17.100 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:17.100 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:17.100 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:17.100 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.100 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.357 nvme0n1 00:23:17.357 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.357 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:17.357 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:17.357 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.357 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.616 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.616 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:17.616 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:23:17.616 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.616 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.616 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.616 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:17.616 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:23:17.616 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:17.616 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:17.616 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:17.616 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:17.616 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmVmNjllMGQ0MjAzNWU3MjliZWIzNTU0OTA4Mjk0NGMwYWEyNTlhYzJjZDBiYjdjK28vdg==: 00:23:17.616 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjBmOWU2NjhjNzNkMmZjOGU5YTcwNTc1YzAwNGQyNDhDVlXB: 00:23:17.616 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:17.616 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:17.616 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmVmNjllMGQ0MjAzNWU3MjliZWIzNTU0OTA4Mjk0NGMwYWEyNTlhYzJjZDBiYjdjK28vdg==: 00:23:17.616 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjBmOWU2NjhjNzNkMmZjOGU5YTcwNTc1YzAwNGQyNDhDVlXB: ]] 00:23:17.616 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjBmOWU2NjhjNzNkMmZjOGU5YTcwNTc1YzAwNGQyNDhDVlXB: 00:23:17.616 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:23:17.616 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:17.616 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:17.616 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:17.616 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:17.616 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:17.616 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:17.616 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.616 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.616 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.616 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:17.616 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:17.616 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:17.616 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:17.616 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:17.616 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:17.616 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:17.616 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:17.616 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:17.616 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:17.616 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:17.616 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:17.616 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.616 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.875 nvme0n1 00:23:17.875 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.875 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:17.875 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:17.875 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.875 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.875 17:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.875 17:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:17.875 17:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:17.875 17:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.875 17:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.875 17:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.875 17:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:17.875 17:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:23:17.875 17:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:17.875 17:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:17.875 17:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:17.875 17:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:17.875 17:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGI2ZDMyY2E4MzgwOThlZjExOGFiODk1NmY2Y2JhNTBjYTliMzYzMmI3MzU0ZjQwYTEzYTVlNmMxMjIyM2I3Ob9Oa6o=: 00:23:17.875 17:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:17.875 17:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:17.875 17:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:17.875 17:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZGI2ZDMyY2E4MzgwOThlZjExOGFiODk1NmY2Y2JhNTBjYTliMzYzMmI3MzU0ZjQwYTEzYTVlNmMxMjIyM2I3Ob9Oa6o=: 00:23:17.875 17:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:17.875 17:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:23:17.875 17:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:17.875 17:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:17.875 17:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:17.875 17:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:17.875 17:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:17.875 17:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:17.875 17:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.875 17:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.875 17:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.875 17:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:17.875 17:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:17.875 17:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:17.875 17:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:17.875 17:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:17.875 17:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:17.875 17:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:17.875 17:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:17.875 17:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:17.875 17:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:17.875 17:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:17.875 17:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:17.875 17:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.875 17:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.442 nvme0n1 00:23:18.442 17:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.442 17:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:18.442 17:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:18.442 17:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.442 17:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.442 17:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.442 17:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:18.442 17:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:18.442 17:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.442 17:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.442 17:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.442 17:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:18.442 17:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:18.442 17:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:23:18.442 17:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:18.442 17:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:18.442 17:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:18.442 17:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:18.442 17:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2I3M2FiNzJkYmJhY2NkZDQ0YjlkZDU1ODhhODg3NmO+xGRR: 00:23:18.442 17:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTEwNjI4OWMyYWU2M2ViZTU5OTUwMWU0NDg3ZjZhZGM1YTk2MTM0YmMzYWNlM2U4N2JiZGJjNmEyOWRhMTYwMm18XcY=: 00:23:18.442 17:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:18.442 17:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:18.442 17:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2I3M2FiNzJkYmJhY2NkZDQ0YjlkZDU1ODhhODg3NmO+xGRR: 00:23:18.442 17:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTEwNjI4OWMyYWU2M2ViZTU5OTUwMWU0NDg3ZjZhZGM1YTk2MTM0YmMzYWNlM2U4N2JiZGJjNmEyOWRhMTYwMm18XcY=: ]] 00:23:18.443 17:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTEwNjI4OWMyYWU2M2ViZTU5OTUwMWU0NDg3ZjZhZGM1YTk2MTM0YmMzYWNlM2U4N2JiZGJjNmEyOWRhMTYwMm18XcY=: 00:23:18.443 17:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:23:18.443 17:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:18.443 17:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:18.443 17:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:18.443 17:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:18.443 17:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:18.443 17:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:18.443 17:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.443 17:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.443 17:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.443 17:24:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:18.443 17:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:18.443 17:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:18.443 17:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:18.443 17:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:18.443 17:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:18.443 17:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:18.443 17:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:18.443 17:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:18.443 17:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:18.443 17:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:18.443 17:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:18.443 17:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.443 17:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.010 nvme0n1 00:23:19.010 17:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.010 17:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:19.010 17:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.010 17:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:19.010 17:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.010 17:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.010 17:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:19.010 17:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:19.010 17:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.010 17:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.010 17:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.010 17:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:19.010 17:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:23:19.010 17:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:19.010 17:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:19.010 17:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:19.010 17:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:19.010 17:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OWI1YjQwMmZjNzhkZjRkMTg1NDIzZTZhNjYxYmNkMjNkNzQ4ODY3MDM1ZTI4N2JiSRmwVQ==: 00:23:19.010 17:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjM5N2Y3OWM3YjAzMzdjOTFhYTYwMmNlOWFiZmY4NGVjOGM5MjA1MGM0YWZlNWJjTAiHNQ==: 00:23:19.010 17:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:19.010 17:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:19.010 17:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWI1YjQwMmZjNzhkZjRkMTg1NDIzZTZhNjYxYmNkMjNkNzQ4ODY3MDM1ZTI4N2JiSRmwVQ==: 00:23:19.010 17:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjM5N2Y3OWM3YjAzMzdjOTFhYTYwMmNlOWFiZmY4NGVjOGM5MjA1MGM0YWZlNWJjTAiHNQ==: ]] 00:23:19.010 17:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjM5N2Y3OWM3YjAzMzdjOTFhYTYwMmNlOWFiZmY4NGVjOGM5MjA1MGM0YWZlNWJjTAiHNQ==: 00:23:19.010 17:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:23:19.010 17:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:19.010 17:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:19.010 17:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:19.010 17:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:19.010 17:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:19.010 17:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:19.010 17:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.010 17:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.010 17:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.010 17:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:19.010 17:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:19.010 17:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:19.010 17:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:19.010 17:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:19.010 17:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:19.010 17:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:19.010 17:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:19.010 17:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:19.010 17:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:19.010 17:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:19.010 17:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:19.010 17:24:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.010 17:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.578 nvme0n1 00:23:19.578 17:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.578 17:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:19.578 17:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.578 17:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:19.578 17:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.578 17:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.578 17:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:19.578 17:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:19.578 17:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.578 17:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.578 17:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.578 17:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:19.578 17:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:23:19.578 17:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:19.578 17:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:19.578 17:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:19.578 17:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:19.578 17:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGI4NGM4MDI5MmM4MGQxZjQ5MDk2NDIyYmY3ZTY4YjenEl3H: 00:23:19.578 17:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWUzNjg2N2I2NTI2N2M2OGI5NzAyYzUzMzBhYmQxMjaF2jO1: 00:23:19.578 17:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:19.578 17:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:19.578 17:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGI4NGM4MDI5MmM4MGQxZjQ5MDk2NDIyYmY3ZTY4YjenEl3H: 00:23:19.578 17:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWUzNjg2N2I2NTI2N2M2OGI5NzAyYzUzMzBhYmQxMjaF2jO1: ]] 00:23:19.578 17:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OWUzNjg2N2I2NTI2N2M2OGI5NzAyYzUzMzBhYmQxMjaF2jO1: 00:23:19.578 17:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:23:19.578 17:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:19.578 17:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:19.578 17:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:19.578 17:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:19.578 17:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:19.578 17:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:19.578 17:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.578 17:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.578 17:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.578 17:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:19.578 17:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:19.578 17:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:19.578 17:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:19.578 17:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:19.578 17:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:19.578 17:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:19.578 17:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:19.578 17:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:19.578 17:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:19.578 17:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:19.578 17:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:19.578 17:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.578 17:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.145 nvme0n1 00:23:20.145 17:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.145 17:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:20.145 17:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.145 17:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:20.145 17:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.145 17:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.404 17:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:20.404 17:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:20.404 17:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.404 17:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.404 17:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.404 17:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:20.404 17:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:23:20.404 17:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:20.404 17:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:20.404 17:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:20.404 17:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:20.404 17:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NmVmNjllMGQ0MjAzNWU3MjliZWIzNTU0OTA4Mjk0NGMwYWEyNTlhYzJjZDBiYjdjK28vdg==: 00:23:20.404 17:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjBmOWU2NjhjNzNkMmZjOGU5YTcwNTc1YzAwNGQyNDhDVlXB: 00:23:20.404 17:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:20.404 17:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:20.404 17:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NmVmNjllMGQ0MjAzNWU3MjliZWIzNTU0OTA4Mjk0NGMwYWEyNTlhYzJjZDBiYjdjK28vdg==: 00:23:20.405 17:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjBmOWU2NjhjNzNkMmZjOGU5YTcwNTc1YzAwNGQyNDhDVlXB: ]] 00:23:20.405 17:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjBmOWU2NjhjNzNkMmZjOGU5YTcwNTc1YzAwNGQyNDhDVlXB: 00:23:20.405 17:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:23:20.405 17:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:20.405 17:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:20.405 17:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:20.405 17:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:20.405 17:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:20.405 17:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:20.405 17:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.405 17:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.405 17:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.405 17:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:20.405 17:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:20.405 17:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:20.405 17:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:20.405 17:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:20.405 17:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:20.405 17:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:20.405 17:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:20.405 17:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:20.405 17:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:20.405 17:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:20.405 17:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:20.405 17:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.405 17:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.972 nvme0n1 00:23:20.972 17:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.972 17:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:20.972 17:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:20.972 17:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.972 17:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.972 17:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.972 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:20.972 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:20.972 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.972 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.972 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.972 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:20.972 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:23:20.972 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:20.972 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:20.972 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:20.972 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:20.972 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZGI2ZDMyY2E4MzgwOThlZjExOGFiODk1NmY2Y2JhNTBjYTliMzYzMmI3MzU0ZjQwYTEzYTVlNmMxMjIyM2I3Ob9Oa6o=: 00:23:20.972 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:20.972 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:20.972 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:20.972 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZGI2ZDMyY2E4MzgwOThlZjExOGFiODk1NmY2Y2JhNTBjYTliMzYzMmI3MzU0ZjQwYTEzYTVlNmMxMjIyM2I3Ob9Oa6o=: 00:23:20.972 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:20.972 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:23:20.972 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:20.972 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:20.972 17:24:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:20.972 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:20.972 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:20.972 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:20.972 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.972 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.972 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.972 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:20.972 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:20.972 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:20.972 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:20.972 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:20.972 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:20.972 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:20.972 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:20.972 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:20.972 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:20.973 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:20.973 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:20.973 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.973 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.540 nvme0n1 00:23:21.540 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.540 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:21.540 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.540 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.540 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:21.540 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.540 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:21.540 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:21.540 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.540 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.540 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.540 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:21.540 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:21.540 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:21.541 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:21.541 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:21.541 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWI1YjQwMmZjNzhkZjRkMTg1NDIzZTZhNjYxYmNkMjNkNzQ4ODY3MDM1ZTI4N2JiSRmwVQ==: 00:23:21.541 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjM5N2Y3OWM3YjAzMzdjOTFhYTYwMmNlOWFiZmY4NGVjOGM5MjA1MGM0YWZlNWJjTAiHNQ==: 00:23:21.541 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:21.541 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:21.541 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWI1YjQwMmZjNzhkZjRkMTg1NDIzZTZhNjYxYmNkMjNkNzQ4ODY3MDM1ZTI4N2JiSRmwVQ==: 00:23:21.541 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjM5N2Y3OWM3YjAzMzdjOTFhYTYwMmNlOWFiZmY4NGVjOGM5MjA1MGM0YWZlNWJjTAiHNQ==: ]] 00:23:21.541 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjM5N2Y3OWM3YjAzMzdjOTFhYTYwMmNlOWFiZmY4NGVjOGM5MjA1MGM0YWZlNWJjTAiHNQ==: 00:23:21.541 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:21.541 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.541 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.541 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.541 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:23:21.541 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:21.541 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:21.541 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:21.541 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:21.541 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:21.541 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:21.541 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:21.541 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:21.541 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:21.541 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:21.541 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:21.541 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # 
local es=0 00:23:21.541 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:21.541 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:21.541 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:21.541 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:21.541 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:21.541 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:21.541 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.541 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.541 2024/07/24 17:24:07 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:23:21.541 request: 00:23:21.541 { 00:23:21.541 "method": "bdev_nvme_attach_controller", 00:23:21.541 "params": { 00:23:21.541 "name": "nvme0", 00:23:21.541 "trtype": "tcp", 00:23:21.541 "traddr": "10.0.0.1", 00:23:21.541 "adrfam": "ipv4", 00:23:21.541 "trsvcid": "4420", 00:23:21.541 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:23:21.541 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:21.541 "prchk_reftag": false, 00:23:21.541 "prchk_guard": false, 00:23:21.541 "hdgst": false, 00:23:21.541 "ddgst": false 00:23:21.541 } 00:23:21.541 } 00:23:21.541 Got JSON-RPC error response 00:23:21.541 GoRPCClient: error on JSON-RPC call 00:23:21.541 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:21.541 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:23:21.541 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:21.541 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:21.541 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:21.541 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:23:21.541 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:23:21.541 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.541 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.541 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.541 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:23:21.541 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:23:21.541 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- 
# local ip 00:23:21.541 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:21.541 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:21.541 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:21.541 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:21.541 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:21.541 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:21.541 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:21.541 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:21.541 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:21.541 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:21.541 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:23:21.541 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:21.541 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:21.541 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:21.541 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:21.541 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:21.541 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:21.541 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.541 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.801 2024/07/24 17:24:07 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:23:21.801 request: 00:23:21.801 { 00:23:21.801 "method": "bdev_nvme_attach_controller", 00:23:21.801 "params": { 00:23:21.801 "name": "nvme0", 00:23:21.801 "trtype": "tcp", 00:23:21.801 "traddr": "10.0.0.1", 00:23:21.801 "adrfam": "ipv4", 00:23:21.801 "trsvcid": "4420", 00:23:21.801 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:23:21.801 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:21.801 "prchk_reftag": false, 00:23:21.801 "prchk_guard": false, 00:23:21.801 "hdgst": false, 00:23:21.801 "ddgst": false, 00:23:21.801 "dhchap_key": "key2" 00:23:21.801 } 00:23:21.801 } 00:23:21.801 Got 
JSON-RPC error response 00:23:21.801 GoRPCClient: error on JSON-RPC call 00:23:21.801 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:21.801 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:23:21.801 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:21.801 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:21.801 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:21.801 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:23:21.801 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:23:21.801 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.801 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.801 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.801 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:23:21.801 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:23:21.801 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:21.801 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:21.801 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:21.801 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:21.801 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:21.801 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:21.801 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:21.801 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:21.801 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:21.801 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:21.801 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:21.801 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:23:21.801 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:21.801 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:21.801 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:21.801 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:21.801 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:21.801 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:21.801 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.801 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.801 2024/07/24 17:24:07 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:23:21.801 request: 00:23:21.801 { 00:23:21.801 "method": "bdev_nvme_attach_controller", 00:23:21.801 "params": { 00:23:21.801 "name": "nvme0", 00:23:21.801 "trtype": "tcp", 00:23:21.801 "traddr": "10.0.0.1", 00:23:21.801 "adrfam": "ipv4", 00:23:21.801 "trsvcid": "4420", 00:23:21.801 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:23:21.801 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:21.801 "prchk_reftag": false, 00:23:21.801 "prchk_guard": false, 00:23:21.801 "hdgst": false, 00:23:21.801 "ddgst": false, 00:23:21.801 "dhchap_key": "key1", 00:23:21.801 "dhchap_ctrlr_key": "ckey2" 00:23:21.801 } 00:23:21.801 } 00:23:21.801 Got JSON-RPC error response 00:23:21.801 GoRPCClient: error on JSON-RPC call 00:23:21.801 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:21.801 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:23:21.801 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:21.801 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:21.801 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:21.801 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:23:21.801 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:23:21.801 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:23:21.801 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:21.801 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:23:21.801 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:21.801 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:23:21.802 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:21.802 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:21.802 rmmod nvme_tcp 00:23:21.802 rmmod nvme_fabrics 00:23:21.802 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:21.802 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:23:21.802 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:23:21.802 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 90923 ']' 00:23:21.802 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # 
killprocess 90923 00:23:21.802 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 90923 ']' 00:23:21.802 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 90923 00:23:21.802 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:23:21.802 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:21.802 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90923 00:23:21.802 killing process with pid 90923 00:23:21.802 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:21.802 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:21.802 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90923' 00:23:21.802 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 90923 00:23:21.802 17:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 90923 00:23:22.061 17:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:22.061 17:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:22.061 17:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:22.061 17:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:22.061 17:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:22.061 17:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:22.061 17:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:22.061 17:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:22.061 17:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:22.061 17:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:23:22.061 17:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:22.061 17:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:23:22.061 17:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:23:22.061 17:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:23:22.061 17:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:22.061 17:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:22.061 17:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:23:22.061 17:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:22.061 17:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:23:22.061 17:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:23:22.061 17:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:22.998 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:22.998 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:23:22.998 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:23:22.998 17:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.bac /tmp/spdk.key-null.BOV /tmp/spdk.key-sha256.bNc /tmp/spdk.key-sha384.GqL /tmp/spdk.key-sha512.yjQ /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:23:22.998 17:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:23.256 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:23.256 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:23.256 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:23.515 00:23:23.516 real 0m34.014s 00:23:23.516 user 0m30.945s 00:23:23.516 sys 0m3.722s 00:23:23.516 17:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:23.516 17:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.516 ************************************ 00:23:23.516 END TEST nvmf_auth_host 00:23:23.516 ************************************ 00:23:23.516 17:24:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:23:23.516 17:24:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:23:23.516 17:24:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:23.516 17:24:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:23.516 17:24:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.516 ************************************ 00:23:23.516 START TEST nvmf_digest 00:23:23.516 ************************************ 00:23:23.516 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:23:23.516 * Looking for test storage... 
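For readers skimming the nvmf_auth_host trace above: every iteration follows the same RPC pattern — load the DH-CHAP secret into the kernel nvmet side with nvmet_auth_set_key, restrict the host to a single digest and FFDHE group, attach a controller to the kernel target with the matching key (plus a controller key where one is defined), confirm the controller shows up, then detach it before the next combination. A minimal sketch of one such pass, assuming rpc_cmd resolves to SPDK's scripts/rpc.py against the running target and that the named keys (key1, ckey1) were registered earlier in the run — neither assumption is shown in this excerpt:

# One auth iteration, condensed from the trace above.
rpc="scripts/rpc.py"

# Limit DH-CHAP negotiation to a single digest and FFDHE group for this pass.
$rpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192

# Attach over TCP to the kernel target using key1 (host key) and ckey1 (controller key).
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# The attach only succeeds if authentication passed; verify, then tear down.
$rpc bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
$rpc bdev_nvme_detach_controller nvme0

The negative cases near the end of the trace invert this: the same attach is wrapped in the suite's NOT helper with a missing or mismatched key, and the expected outcome is exactly the Code=-5 (Input/output error) JSON-RPC response recorded above.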
00:23:23.516 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:23.516 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:23.516 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:23:23.516 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:23.516 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:23.516 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:23.516 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:23.516 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:23.516 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:23.516 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:23.516 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:23.516 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:23.516 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:23.516 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:23:23.516 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:23:23.516 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:23.516 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:23.516 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:23.516 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:23.516 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:23.516 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:23.516 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:23.516 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:23.516 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.516 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.516 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.516 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:23:23.516 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.516 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:23:23.516 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:23.516 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:23.516 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:23.516 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:23.516 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:23.516 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:23.516 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:23.516 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:23.516 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:23:23.516 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:23:23.516 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:23:23.516 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:23:23.516 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:23:23.516 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:23.516 
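The nvmf_digest run that begins here targets the same virtual topology as the previous test: nvmftestinit detects the veth-based setup and falls through to nvmf_veth_init, and the trace that follows rebuilds the namespace, veth pairs and bridge from scratch (the initial "Cannot find device" and "Cannot open network namespace ... No such file or directory" messages are just the cleanup of a topology that does not exist yet). Below is a simplified outline of what gets assembled, using only the interface names and addresses visible in the trace; the second target interface (10.0.0.3 on nvmf_tgt_if2) is handled the same way and is omitted, and the final bridge-enslaving step is inferred from the earlier 'nomaster' teardown rather than shown verbatim in this excerpt:

# Target side lives in its own network namespace.
ip netns add nvmf_tgt_ns_spdk

# Two veth pairs: one for the initiator, one for the target.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

# Addresses: 10.0.0.1 for the initiator, 10.0.0.2 for the in-namespace target.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

# Bring the links up on both sides of the namespace boundary.
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Bridge the br-side peers together (the enslaving commands are inferred from
# the 'nomaster' cleanup seen above, not shown verbatim in this excerpt).
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

The result is a 10.0.0.0/24 segment where 10.0.0.1 is the initiator address and 10.0.0.2/10.0.0.3 are the target listeners — the same NVMF_INITIATOR_IP and NVMF_FIRST_TARGET_IP values resolved throughout the auth trace above.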
17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:23.516 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:23.516 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:23.516 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:23.516 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:23.516 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:23.516 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:23.516 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:23:23.516 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:23:23.516 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:23:23.516 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:23:23.516 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:23:23.516 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # nvmf_veth_init 00:23:23.516 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:23.516 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:23.516 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:23.516 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:23.516 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:23.516 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:23.516 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:23.516 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:23.516 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:23.516 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:23.516 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:23.516 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:23.516 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:23.516 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:23.516 Cannot find device "nvmf_tgt_br" 00:23:23.516 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # true 00:23:23.516 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:23.516 Cannot find device "nvmf_tgt_br2" 00:23:23.516 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # true 00:23:23.516 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:23.516 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:23.775 
Cannot find device "nvmf_tgt_br" 00:23:23.775 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # true 00:23:23.775 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:23.775 Cannot find device "nvmf_tgt_br2" 00:23:23.775 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # true 00:23:23.775 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:23.775 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:23.775 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:23.775 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:23.775 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:23:23.775 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:23.775 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:23.775 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:23:23.776 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:23.776 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:23.776 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:23.776 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:23.776 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:23.776 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:23.776 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:23.776 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:23.776 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:23.776 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:23.776 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:23.776 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:23.776 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:23.776 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:23.776 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:23.776 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:23.776 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:23.776 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:23.776 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set 
nvmf_init_br master nvmf_br 00:23:23.776 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:23.776 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:23.776 17:24:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:23.776 17:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:23.776 17:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:23.776 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:23.776 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:23:23.776 00:23:23.776 --- 10.0.0.2 ping statistics --- 00:23:23.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:23.776 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:23:23.776 17:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:24.035 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:24.035 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:23:24.035 00:23:24.035 --- 10.0.0.3 ping statistics --- 00:23:24.035 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:24.035 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:23:24.035 17:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:24.035 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:24.035 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:23:24.035 00:23:24.035 --- 10.0.0.1 ping statistics --- 00:23:24.035 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:24.035 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:23:24.035 17:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:24.035 17:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@433 -- # return 0 00:23:24.035 17:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:24.035 17:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:24.035 17:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:24.035 17:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:24.035 17:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:24.035 17:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:24.035 17:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:24.035 17:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:23:24.035 17:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:23:24.035 17:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:23:24.035 17:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:23:24.035 17:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:24.035 17:24:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:23:24.035 ************************************ 00:23:24.035 START TEST nvmf_digest_clean 00:23:24.035 
************************************ 00:23:24.035 17:24:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:23:24.035 17:24:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:23:24.035 17:24:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:23:24.035 17:24:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:23:24.035 17:24:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:23:24.035 17:24:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:23:24.035 17:24:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:24.036 17:24:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:24.036 17:24:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:24.036 17:24:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=92507 00:23:24.036 17:24:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:23:24.036 17:24:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 92507 00:23:24.036 17:24:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 92507 ']' 00:23:24.036 17:24:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:24.036 17:24:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:24.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:24.036 17:24:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:24.036 17:24:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:24.036 17:24:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:24.036 [2024-07-24 17:24:10.116066] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:23:24.036 [2024-07-24 17:24:10.116177] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:24.036 [2024-07-24 17:24:10.259333] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:24.295 [2024-07-24 17:24:10.382913] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:24.295 [2024-07-24 17:24:10.382971] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:24.295 [2024-07-24 17:24:10.382985] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:24.295 [2024-07-24 17:24:10.382996] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:24.295 [2024-07-24 17:24:10.383006] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:24.295 [2024-07-24 17:24:10.383060] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:24.862 17:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:24.862 17:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:23:24.862 17:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:24.862 17:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:24.862 17:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:25.121 17:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:25.121 17:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:23:25.121 17:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:23:25.121 17:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:23:25.121 17:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.121 17:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:25.121 null0 00:23:25.121 [2024-07-24 17:24:11.226207] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:25.121 [2024-07-24 17:24:11.250315] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:25.121 17:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.121 17:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:23:25.121 17:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:23:25.121 17:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:23:25.121 17:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:23:25.121 17:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:23:25.121 17:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:23:25.121 17:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:23:25.121 17:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=92557 00:23:25.121 17:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 92557 /var/tmp/bperf.sock 00:23:25.121 17:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 
--wait-for-rpc 00:23:25.121 17:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 92557 ']' 00:23:25.121 17:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:25.121 17:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:25.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:25.122 17:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:25.122 17:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:25.122 17:24:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:25.122 [2024-07-24 17:24:11.313841] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:23:25.122 [2024-07-24 17:24:11.313953] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92557 ] 00:23:25.380 [2024-07-24 17:24:11.455351] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:25.380 [2024-07-24 17:24:11.583406] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:26.315 17:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:26.315 17:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:23:26.315 17:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:23:26.315 17:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:23:26.315 17:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:26.573 17:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:26.573 17:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:26.831 nvme0n1 00:23:26.831 17:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:23:26.831 17:24:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:26.831 Running I/O for 2 seconds... 
00:23:29.361 00:23:29.361 Latency(us) 00:23:29.361 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:29.361 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:23:29.361 nvme0n1 : 2.01 20320.75 79.38 0.00 0.00 6290.45 3038.49 13047.62 00:23:29.361 =================================================================================================================== 00:23:29.361 Total : 20320.75 79.38 0.00 0.00 6290.45 3038.49 13047.62 00:23:29.361 0 00:23:29.361 17:24:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:23:29.361 17:24:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:23:29.361 17:24:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:23:29.361 17:24:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:23:29.361 | select(.opcode=="crc32c") 00:23:29.361 | "\(.module_name) \(.executed)"' 00:23:29.361 17:24:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:23:29.361 17:24:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:23:29.361 17:24:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:23:29.361 17:24:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:23:29.361 17:24:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:23:29.361 17:24:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 92557 00:23:29.361 17:24:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 92557 ']' 00:23:29.361 17:24:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 92557 00:23:29.362 17:24:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:23:29.362 17:24:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:29.362 17:24:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92557 00:23:29.362 17:24:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:29.362 17:24:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:29.362 killing process with pid 92557 00:23:29.362 17:24:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92557' 00:23:29.362 Received shutdown signal, test time was about 2.000000 seconds 00:23:29.362 00:23:29.362 Latency(us) 00:23:29.362 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:29.362 =================================================================================================================== 00:23:29.362 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:29.362 17:24:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 92557 00:23:29.362 17:24:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 
92557 00:23:29.621 17:24:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:23:29.621 17:24:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:23:29.621 17:24:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:23:29.621 17:24:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:23:29.621 17:24:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:23:29.621 17:24:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:23:29.621 17:24:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:23:29.621 17:24:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=92647 00:23:29.621 17:24:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 92647 /var/tmp/bperf.sock 00:23:29.621 17:24:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:23:29.621 17:24:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 92647 ']' 00:23:29.621 17:24:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:29.621 17:24:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:29.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:29.621 17:24:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:29.621 17:24:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:29.621 17:24:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:29.621 [2024-07-24 17:24:15.668837] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:23:29.621 [2024-07-24 17:24:15.668945] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92647 ] 00:23:29.621 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:29.621 Zero copy mechanism will not be used. 
00:23:29.621 [2024-07-24 17:24:15.808224] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:29.880 [2024-07-24 17:24:15.909451] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:30.447 17:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:30.447 17:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:23:30.447 17:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:23:30.447 17:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:23:30.447 17:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:30.707 17:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:30.707 17:24:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:31.276 nvme0n1 00:23:31.276 17:24:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:23:31.276 17:24:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:31.277 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:31.277 Zero copy mechanism will not be used. 00:23:31.277 Running I/O for 2 seconds... 
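Editor's note: when each two-second run completes, the harness repeats the digest accounting check shown after the first run above: it pulls accel statistics from bperf and confirms that crc32c was actually executed, and by the expected module (software here, since every run passes scan_dsa=false). Roughly, reusing the jq filter from the trace:
read -r acc_module acc_executed < <(
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
)
# At least one crc32c must have run, and it must have been computed by the software module.
(( acc_executed > 0 )) && [[ $acc_module == software ]]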
00:23:33.183 00:23:33.183 Latency(us) 00:23:33.183 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:33.183 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:23:33.183 nvme0n1 : 2.00 8296.56 1037.07 0.00 0.00 1924.77 558.55 10426.18 00:23:33.183 =================================================================================================================== 00:23:33.183 Total : 8296.56 1037.07 0.00 0.00 1924.77 558.55 10426.18 00:23:33.183 0 00:23:33.183 17:24:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:23:33.183 17:24:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:23:33.183 17:24:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:23:33.183 17:24:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:23:33.183 17:24:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:23:33.183 | select(.opcode=="crc32c") 00:23:33.183 | "\(.module_name) \(.executed)"' 00:23:33.442 17:24:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:23:33.442 17:24:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:23:33.442 17:24:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:23:33.442 17:24:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:23:33.442 17:24:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 92647 00:23:33.442 17:24:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 92647 ']' 00:23:33.442 17:24:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 92647 00:23:33.442 17:24:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:23:33.442 17:24:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:33.442 17:24:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92647 00:23:33.442 17:24:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:33.442 17:24:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:33.442 killing process with pid 92647 00:23:33.442 17:24:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92647' 00:23:33.442 Received shutdown signal, test time was about 2.000000 seconds 00:23:33.442 00:23:33.442 Latency(us) 00:23:33.442 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:33.442 =================================================================================================================== 00:23:33.442 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:33.442 17:24:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 92647 00:23:33.442 17:24:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 
92647 00:23:33.699 17:24:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:23:33.699 17:24:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:23:33.700 17:24:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:23:33.700 17:24:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:23:33.700 17:24:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:23:33.700 17:24:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:23:33.700 17:24:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:23:33.700 17:24:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=92732 00:23:33.700 17:24:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 92732 /var/tmp/bperf.sock 00:23:33.700 17:24:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:23:33.700 17:24:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 92732 ']' 00:23:33.700 17:24:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:33.700 17:24:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:33.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:33.700 17:24:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:33.700 17:24:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:33.700 17:24:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:33.700 [2024-07-24 17:24:19.923934] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
00:23:33.700 [2024-07-24 17:24:19.924058] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92732 ] 00:23:33.958 [2024-07-24 17:24:20.062305] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:33.958 [2024-07-24 17:24:20.170776] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:34.893 17:24:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:34.893 17:24:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:23:34.893 17:24:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:23:34.893 17:24:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:23:34.893 17:24:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:35.151 17:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:35.151 17:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:35.410 nvme0n1 00:23:35.410 17:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:23:35.410 17:24:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:35.410 Running I/O for 2 seconds... 
00:23:37.987 00:23:37.987 Latency(us) 00:23:37.987 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:37.987 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:37.987 nvme0n1 : 2.01 23985.97 93.70 0.00 0.00 5330.63 2219.29 9353.77 00:23:37.987 =================================================================================================================== 00:23:37.987 Total : 23985.97 93.70 0.00 0.00 5330.63 2219.29 9353.77 00:23:37.987 0 00:23:37.987 17:24:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:23:37.987 17:24:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:23:37.987 17:24:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:23:37.987 | select(.opcode=="crc32c") 00:23:37.987 | "\(.module_name) \(.executed)"' 00:23:37.987 17:24:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:23:37.987 17:24:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:23:37.987 17:24:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:23:37.987 17:24:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:23:37.987 17:24:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:23:37.987 17:24:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:23:37.987 17:24:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 92732 00:23:37.987 17:24:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 92732 ']' 00:23:37.987 17:24:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 92732 00:23:37.987 17:24:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:23:37.987 17:24:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:37.987 17:24:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92732 00:23:37.987 17:24:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:37.987 17:24:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:37.987 killing process with pid 92732 00:23:37.987 17:24:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92732' 00:23:37.987 17:24:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 92732 00:23:37.987 Received shutdown signal, test time was about 2.000000 seconds 00:23:37.987 00:23:37.987 Latency(us) 00:23:37.987 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:37.987 =================================================================================================================== 00:23:37.987 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:37.987 17:24:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 
92732 00:23:37.987 17:24:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:23:37.987 17:24:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:23:37.987 17:24:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:23:37.987 17:24:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:23:37.987 17:24:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:23:37.987 17:24:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:23:37.987 17:24:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:23:37.987 17:24:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:23:37.987 17:24:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=92828 00:23:37.987 17:24:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 92828 /var/tmp/bperf.sock 00:23:37.987 17:24:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 92828 ']' 00:23:37.987 17:24:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:37.987 17:24:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:37.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:37.987 17:24:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:37.987 17:24:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:37.988 17:24:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:37.988 [2024-07-24 17:24:24.211289] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:23:37.988 [2024-07-24 17:24:24.211366] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92828 ] 00:23:37.988 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:37.988 Zero copy mechanism will not be used. 
00:23:38.245 [2024-07-24 17:24:24.343662] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:38.245 [2024-07-24 17:24:24.443965] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:39.179 17:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:39.179 17:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:23:39.179 17:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:23:39.180 17:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:23:39.180 17:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:39.438 17:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:39.438 17:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:39.696 nvme0n1 00:23:39.696 17:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:23:39.696 17:24:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:39.696 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:39.696 Zero copy mechanism will not be used. 00:23:39.696 Running I/O for 2 seconds... 
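Editor's note: this is the last of the four clean-digest workloads; together they cover both I/O directions at a small block size with a deep queue and a large block size with a shallow queue. Written as a loop over the same arguments run_bperf receives in this trace (DSA scanning disabled throughout), the matrix would look roughly like:
while read -r rw bs qd; do
    run_bperf "$rw" "$bs" "$qd" false    # helper from test/nvmf/host/digest.sh
done <<'EOF'
randread 4096 128
randread 131072 16
randwrite 4096 128
randwrite 131072 16
EOF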
00:23:42.228 00:23:42.228 Latency(us) 00:23:42.228 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:42.228 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:23:42.228 nvme0n1 : 2.00 6608.97 826.12 0.00 0.00 2415.40 2055.45 8638.84 00:23:42.228 =================================================================================================================== 00:23:42.228 Total : 6608.97 826.12 0.00 0.00 2415.40 2055.45 8638.84 00:23:42.228 0 00:23:42.228 17:24:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:23:42.228 17:24:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:23:42.228 17:24:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:23:42.228 17:24:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:23:42.228 17:24:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:23:42.228 | select(.opcode=="crc32c") 00:23:42.228 | "\(.module_name) \(.executed)"' 00:23:42.228 17:24:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:23:42.228 17:24:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:23:42.228 17:24:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:23:42.228 17:24:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:23:42.228 17:24:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 92828 00:23:42.228 17:24:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 92828 ']' 00:23:42.228 17:24:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 92828 00:23:42.228 17:24:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:23:42.228 17:24:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:42.228 17:24:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92828 00:23:42.229 killing process with pid 92828 00:23:42.229 Received shutdown signal, test time was about 2.000000 seconds 00:23:42.229 00:23:42.229 Latency(us) 00:23:42.229 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:42.229 =================================================================================================================== 00:23:42.229 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:42.229 17:24:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:42.229 17:24:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:42.229 17:24:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92828' 00:23:42.229 17:24:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 92828 00:23:42.229 17:24:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 
92828 00:23:42.229 17:24:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 92507 00:23:42.229 17:24:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 92507 ']' 00:23:42.229 17:24:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 92507 00:23:42.229 17:24:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:23:42.229 17:24:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:42.229 17:24:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92507 00:23:42.229 killing process with pid 92507 00:23:42.229 17:24:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:42.229 17:24:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:42.229 17:24:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92507' 00:23:42.229 17:24:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 92507 00:23:42.229 17:24:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 92507 00:23:42.487 ************************************ 00:23:42.487 END TEST nvmf_digest_clean 00:23:42.487 ************************************ 00:23:42.487 00:23:42.487 real 0m18.609s 00:23:42.487 user 0m35.504s 00:23:42.487 sys 0m4.573s 00:23:42.487 17:24:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:42.487 17:24:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:42.487 17:24:28 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:23:42.487 17:24:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:23:42.487 17:24:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:42.487 17:24:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:23:42.487 ************************************ 00:23:42.487 START TEST nvmf_digest_error 00:23:42.487 ************************************ 00:23:42.487 17:24:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # run_digest_error 00:23:42.487 17:24:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:23:42.487 17:24:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:42.487 17:24:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:42.487 17:24:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:42.487 17:24:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=92941 00:23:42.487 17:24:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 92941 00:23:42.487 17:24:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
--wait-for-rpc 00:23:42.487 17:24:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 92941 ']' 00:23:42.487 17:24:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:42.487 17:24:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:42.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:42.487 17:24:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:42.487 17:24:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:42.487 17:24:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:42.745 [2024-07-24 17:24:28.777012] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:23:42.745 [2024-07-24 17:24:28.777147] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:42.745 [2024-07-24 17:24:28.916803] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:43.003 [2024-07-24 17:24:29.028415] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:43.003 [2024-07-24 17:24:29.028494] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:43.003 [2024-07-24 17:24:29.028521] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:43.003 [2024-07-24 17:24:29.028529] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:43.003 [2024-07-24 17:24:29.028536] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
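Editor's note: as in the clean-digest test, the target for nvmf_digest_error is started inside the nvmf_tgt_ns_spdk namespace and held at the RPC stage with --wait-for-rpc so that error injection can be wired up before any I/O flows. A minimal stand-in for the nvmfappstart/waitforlisten helpers used above (the polling loop is an illustration, not the harness's exact mechanism):
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
nvmfpid=$!
# Wait until the target answers on its default RPC socket before sending configuration RPCs.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
    sleep 0.5
done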
00:23:43.003 [2024-07-24 17:24:29.028560] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:43.570 17:24:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:43.570 17:24:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:23:43.570 17:24:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:43.570 17:24:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:43.570 17:24:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:43.570 17:24:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:43.570 17:24:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:23:43.570 17:24:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.570 17:24:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:43.570 [2024-07-24 17:24:29.781185] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:23:43.570 17:24:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.570 17:24:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:23:43.570 17:24:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:23:43.570 17:24:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.570 17:24:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:43.828 null0 00:23:43.828 [2024-07-24 17:24:29.899881] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:43.828 [2024-07-24 17:24:29.924044] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:43.828 17:24:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.828 17:24:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:23:43.828 17:24:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:23:43.828 17:24:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:23:43.828 17:24:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:23:43.828 17:24:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:23:43.828 17:24:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=92985 00:23:43.828 17:24:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 92985 /var/tmp/bperf.sock 00:23:43.828 17:24:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:23:43.828 17:24:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 92985 ']' 00:23:43.828 17:24:29 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:43.828 17:24:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:43.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:43.828 17:24:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:43.828 17:24:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:43.828 17:24:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:43.828 [2024-07-24 17:24:29.991801] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:23:43.828 [2024-07-24 17:24:29.991906] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92985 ] 00:23:44.087 [2024-07-24 17:24:30.133639] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:44.087 [2024-07-24 17:24:30.255531] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:45.022 17:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:45.022 17:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:23:45.022 17:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:45.022 17:24:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:45.022 17:24:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:23:45.022 17:24:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:45.022 17:24:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:45.022 17:24:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:45.022 17:24:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:45.023 17:24:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:45.281 nvme0n1 00:23:45.540 17:24:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:23:45.540 17:24:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:45.540 17:24:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:45.540 17:24:31 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:45.540 17:24:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:23:45.540 17:24:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:45.540 Running I/O for 2 seconds... 00:23:45.540 [2024-07-24 17:24:31.645929] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:45.540 [2024-07-24 17:24:31.646010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:9956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.540 [2024-07-24 17:24:31.646043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.540 [2024-07-24 17:24:31.659673] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:45.540 [2024-07-24 17:24:31.659726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:11230 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.540 [2024-07-24 17:24:31.659756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.540 [2024-07-24 17:24:31.671656] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:45.540 [2024-07-24 17:24:31.671709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.540 [2024-07-24 17:24:31.671738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.540 [2024-07-24 17:24:31.682563] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:45.540 [2024-07-24 17:24:31.682615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:24629 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.540 [2024-07-24 17:24:31.682644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.540 [2024-07-24 17:24:31.696874] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:45.540 [2024-07-24 17:24:31.696926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:23772 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.540 [2024-07-24 17:24:31.696954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.540 [2024-07-24 17:24:31.710021] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:45.540 [2024-07-24 17:24:31.710093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:23070 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.540 [2024-07-24 17:24:31.710133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
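Condensed from the xtrace above, the initiator-side commands that produce the failures below can be replayed roughly as follows. The paths, addresses, and flags are taken verbatim from the log; the wrapper function and standalone-script framing are assumptions, and in practice one would wait for /var/tmp/bperf.sock to appear before issuing RPCs.

#!/usr/bin/env bash
# Sketch of the nvmf_digest_error initiator flow traced above.
# Assumes the target is already listening on 10.0.0.2:4420 with crc32c
# assigned to the "error" accel module (see the target-side RPCs earlier).
SPDK=/home/vagrant/spdk_repo/spdk
BPERF_SOCK=/var/tmp/bperf.sock

# bdevperf on core 1 (-m 2), 4 KiB random reads, QD 128, 2 s run, started paused (-z).
"$SPDK/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" -w randread -o 4096 -t 2 -q 128 -z &

rpc() { "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" "$@"; }

rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
rpc accel_error_inject_error -o crc32c -t disable            # start from a clean state
rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0           # data digest enabled
rpc accel_error_inject_error -o crc32c -t corrupt -i 256     # corrupt crc32c results

# Drive the workload; each corrupted digest surfaces as the error groups below.
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests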
00:23:45.540 [2024-07-24 17:24:31.722728] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:45.540 [2024-07-24 17:24:31.722781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:5859 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.540 [2024-07-24 17:24:31.722811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.540 [2024-07-24 17:24:31.737852] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:45.540 [2024-07-24 17:24:31.737906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:10561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.540 [2024-07-24 17:24:31.737934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.540 [2024-07-24 17:24:31.750964] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:45.540 [2024-07-24 17:24:31.751015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:11847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.540 [2024-07-24 17:24:31.751054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.540 [2024-07-24 17:24:31.763689] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:45.540 [2024-07-24 17:24:31.763741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.540 [2024-07-24 17:24:31.763769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.799 [2024-07-24 17:24:31.776577] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:45.799 [2024-07-24 17:24:31.776644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.799 [2024-07-24 17:24:31.776673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.799 [2024-07-24 17:24:31.789258] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:45.799 [2024-07-24 17:24:31.789309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:23742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.799 [2024-07-24 17:24:31.789338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.799 [2024-07-24 17:24:31.802312] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:45.799 [2024-07-24 17:24:31.802362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:1259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.799 [2024-07-24 17:24:31.802390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.799 [2024-07-24 17:24:31.813666] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:45.799 [2024-07-24 17:24:31.813718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:7139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.799 [2024-07-24 17:24:31.813747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.799 [2024-07-24 17:24:31.826291] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:45.799 [2024-07-24 17:24:31.826343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:13984 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.799 [2024-07-24 17:24:31.826371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.799 [2024-07-24 17:24:31.839628] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:45.799 [2024-07-24 17:24:31.839680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:6454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.799 [2024-07-24 17:24:31.839708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.799 [2024-07-24 17:24:31.852533] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:45.799 [2024-07-24 17:24:31.852586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:21757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.800 [2024-07-24 17:24:31.852615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.800 [2024-07-24 17:24:31.862083] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:45.800 [2024-07-24 17:24:31.862143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.800 [2024-07-24 17:24:31.862172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.800 [2024-07-24 17:24:31.877615] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:45.800 [2024-07-24 17:24:31.877666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:5221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.800 [2024-07-24 17:24:31.877695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.800 [2024-07-24 17:24:31.888956] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:45.800 [2024-07-24 17:24:31.889007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:24097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.800 [2024-07-24 17:24:31.889036] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.800 [2024-07-24 17:24:31.901240] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:45.800 [2024-07-24 17:24:31.901292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:5923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.800 [2024-07-24 17:24:31.901321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.800 [2024-07-24 17:24:31.912040] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:45.800 [2024-07-24 17:24:31.912091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:18045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.800 [2024-07-24 17:24:31.912129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.800 [2024-07-24 17:24:31.925361] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:45.800 [2024-07-24 17:24:31.925413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:8474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.800 [2024-07-24 17:24:31.925442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.800 [2024-07-24 17:24:31.938538] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:45.800 [2024-07-24 17:24:31.938591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:10731 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.800 [2024-07-24 17:24:31.938620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.800 [2024-07-24 17:24:31.952440] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:45.800 [2024-07-24 17:24:31.952493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:7795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.800 [2024-07-24 17:24:31.952522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.800 [2024-07-24 17:24:31.967029] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:45.800 [2024-07-24 17:24:31.967076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:11660 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.800 [2024-07-24 17:24:31.967090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.800 [2024-07-24 17:24:31.979687] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:45.800 [2024-07-24 17:24:31.979740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:9550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
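Each failed read in this run appears as a three-entry group: the data digest error from nvme_tcp.c:1459, the READ command dump from nvme_qpair.c:243, and the completion from nvme_qpair.c:474 reporting COMMAND TRANSIENT TRANSPORT ERROR (00/22), i.e. status code type 0x0, status code 0x22. To tally how many digest failures landed in each second of the run, something like the following works on a saved copy of this log (the file name is illustrative):

# Count "data digest error" notices per wall-clock second in a saved log.
grep 'data digest error' nvmf_digest_error.log |
    sed -E 's/.*\[2024-07-24 ([0-9:]+)\..*/\1/' |   # keep HH:MM:SS from the bracketed timestamp
    sort | uniq -c

Because --bdev-retry-count -1 was set above, these transient transport errors can be retried by the bdev_nvme layer instead of immediately failing the bdevperf job.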
00:23:45.800 [2024-07-24 17:24:31.979768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.800 [2024-07-24 17:24:31.993169] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:45.800 [2024-07-24 17:24:31.993221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:7148 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.800 [2024-07-24 17:24:31.993250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.800 [2024-07-24 17:24:32.006372] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:45.800 [2024-07-24 17:24:32.006424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:24178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.800 [2024-07-24 17:24:32.006454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.800 [2024-07-24 17:24:32.019892] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:45.800 [2024-07-24 17:24:32.019945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:15161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.800 [2024-07-24 17:24:32.019974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.059 [2024-07-24 17:24:32.034718] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:46.059 [2024-07-24 17:24:32.034774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21758 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.059 [2024-07-24 17:24:32.034789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.059 [2024-07-24 17:24:32.047133] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:46.059 [2024-07-24 17:24:32.047172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:1646 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.059 [2024-07-24 17:24:32.047187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.059 [2024-07-24 17:24:32.061964] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:46.059 [2024-07-24 17:24:32.062034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:22805 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.059 [2024-07-24 17:24:32.062063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.059 [2024-07-24 17:24:32.075151] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:46.059 [2024-07-24 17:24:32.075192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 
lba:9605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.060 [2024-07-24 17:24:32.075206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.060 [2024-07-24 17:24:32.089145] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:46.060 [2024-07-24 17:24:32.089224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.060 [2024-07-24 17:24:32.089254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.060 [2024-07-24 17:24:32.103962] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:46.060 [2024-07-24 17:24:32.104016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:18600 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.060 [2024-07-24 17:24:32.104046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.060 [2024-07-24 17:24:32.116936] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:46.060 [2024-07-24 17:24:32.116990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:12483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.060 [2024-07-24 17:24:32.117021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.060 [2024-07-24 17:24:32.131454] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:46.060 [2024-07-24 17:24:32.131509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.060 [2024-07-24 17:24:32.131523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.060 [2024-07-24 17:24:32.144907] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:46.060 [2024-07-24 17:24:32.144961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:1013 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.060 [2024-07-24 17:24:32.144990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.060 [2024-07-24 17:24:32.157496] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:46.060 [2024-07-24 17:24:32.157550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15560 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.060 [2024-07-24 17:24:32.157580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.060 [2024-07-24 17:24:32.169668] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:46.060 [2024-07-24 17:24:32.169722] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:8783 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.060 [2024-07-24 17:24:32.169752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.060 [2024-07-24 17:24:32.183814] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:46.060 [2024-07-24 17:24:32.183868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:25009 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.060 [2024-07-24 17:24:32.183897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.060 [2024-07-24 17:24:32.198673] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:46.060 [2024-07-24 17:24:32.198725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:21097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.060 [2024-07-24 17:24:32.198755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.060 [2024-07-24 17:24:32.213260] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:46.060 [2024-07-24 17:24:32.213299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.060 [2024-07-24 17:24:32.213313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.060 [2024-07-24 17:24:32.227095] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:46.060 [2024-07-24 17:24:32.227148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:17153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.060 [2024-07-24 17:24:32.227162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.060 [2024-07-24 17:24:32.240442] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:46.060 [2024-07-24 17:24:32.240483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:12100 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.060 [2024-07-24 17:24:32.240497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.060 [2024-07-24 17:24:32.252043] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:46.060 [2024-07-24 17:24:32.252124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:2734 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.060 [2024-07-24 17:24:32.252139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.060 [2024-07-24 17:24:32.265176] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 
00:23:46.060 [2024-07-24 17:24:32.265228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:5638 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.060 [2024-07-24 17:24:32.265258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.060 [2024-07-24 17:24:32.277419] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:46.060 [2024-07-24 17:24:32.277472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:25229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.060 [2024-07-24 17:24:32.277502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.060 [2024-07-24 17:24:32.291651] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:46.060 [2024-07-24 17:24:32.291706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:9436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.060 [2024-07-24 17:24:32.291720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.319 [2024-07-24 17:24:32.308802] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:46.319 [2024-07-24 17:24:32.308855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:22514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.319 [2024-07-24 17:24:32.308883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.319 [2024-07-24 17:24:32.323301] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:46.319 [2024-07-24 17:24:32.323340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:10281 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.319 [2024-07-24 17:24:32.323354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.319 [2024-07-24 17:24:32.334822] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:46.319 [2024-07-24 17:24:32.334876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:5401 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.319 [2024-07-24 17:24:32.334905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.319 [2024-07-24 17:24:32.345877] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:46.319 [2024-07-24 17:24:32.345929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:7213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.319 [2024-07-24 17:24:32.345958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.319 [2024-07-24 17:24:32.360751] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:46.319 [2024-07-24 17:24:32.360804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:2259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.319 [2024-07-24 17:24:32.360834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.319 [2024-07-24 17:24:32.375669] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:46.319 [2024-07-24 17:24:32.375721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:10806 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.319 [2024-07-24 17:24:32.375750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.319 [2024-07-24 17:24:32.386903] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:46.319 [2024-07-24 17:24:32.386954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:15675 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.319 [2024-07-24 17:24:32.386983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.319 [2024-07-24 17:24:32.400123] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:46.319 [2024-07-24 17:24:32.400186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:7380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.320 [2024-07-24 17:24:32.400216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.320 [2024-07-24 17:24:32.413737] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:46.320 [2024-07-24 17:24:32.413791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:8543 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.320 [2024-07-24 17:24:32.413821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.320 [2024-07-24 17:24:32.425451] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:46.320 [2024-07-24 17:24:32.425508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:18181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.320 [2024-07-24 17:24:32.425537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.320 [2024-07-24 17:24:32.437885] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:46.320 [2024-07-24 17:24:32.437938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.320 [2024-07-24 17:24:32.437967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:23:46.320 [2024-07-24 17:24:32.449740] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:46.320 [2024-07-24 17:24:32.449791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:6758 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.320 [2024-07-24 17:24:32.449820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.320 [2024-07-24 17:24:32.462061] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:46.320 [2024-07-24 17:24:32.462155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.320 [2024-07-24 17:24:32.462170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.320 [2024-07-24 17:24:32.476636] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:46.320 [2024-07-24 17:24:32.476687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:9313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.320 [2024-07-24 17:24:32.476716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.320 [2024-07-24 17:24:32.491442] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:46.320 [2024-07-24 17:24:32.491495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.320 [2024-07-24 17:24:32.491523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.320 [2024-07-24 17:24:32.505729] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:46.320 [2024-07-24 17:24:32.505780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:22114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.320 [2024-07-24 17:24:32.505808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.320 [2024-07-24 17:24:32.520297] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:46.320 [2024-07-24 17:24:32.520381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:5339 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.320 [2024-07-24 17:24:32.520393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.320 [2024-07-24 17:24:32.532178] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:46.320 [2024-07-24 17:24:32.532228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:23812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.320 [2024-07-24 17:24:32.532242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.320 [2024-07-24 17:24:32.546218] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:46.320 [2024-07-24 17:24:32.546257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:7048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.320 [2024-07-24 17:24:32.546271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.579 [2024-07-24 17:24:32.560366] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:46.579 [2024-07-24 17:24:32.560432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:23100 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.579 [2024-07-24 17:24:32.560460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.579 [2024-07-24 17:24:32.573063] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:46.579 [2024-07-24 17:24:32.573139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:15638 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.579 [2024-07-24 17:24:32.573153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.579 [2024-07-24 17:24:32.586078] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:46.579 [2024-07-24 17:24:32.586136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:4757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.579 [2024-07-24 17:24:32.586165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.579 [2024-07-24 17:24:32.598637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:46.579 [2024-07-24 17:24:32.598687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:23190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.579 [2024-07-24 17:24:32.598715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.579 [2024-07-24 17:24:32.610191] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:46.579 [2024-07-24 17:24:32.610253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:5461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.579 [2024-07-24 17:24:32.610282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.579 [2024-07-24 17:24:32.621869] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:46.579 [2024-07-24 17:24:32.621919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:23441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.579 [2024-07-24 17:24:32.621947] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.579 [2024-07-24 17:24:32.636190] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:46.579 [2024-07-24 17:24:32.636240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.579 [2024-07-24 17:24:32.636268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.580 [2024-07-24 17:24:32.648676] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:46.580 [2024-07-24 17:24:32.648727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:5814 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.580 [2024-07-24 17:24:32.648755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.580 [2024-07-24 17:24:32.659614] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:46.580 [2024-07-24 17:24:32.659664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:23293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.580 [2024-07-24 17:24:32.659691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.580 [2024-07-24 17:24:32.670264] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:46.580 [2024-07-24 17:24:32.670315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:14730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.580 [2024-07-24 17:24:32.670344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.580 [2024-07-24 17:24:32.682449] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:46.580 [2024-07-24 17:24:32.682498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:5598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.580 [2024-07-24 17:24:32.682526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.580 [2024-07-24 17:24:32.694796] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:46.580 [2024-07-24 17:24:32.694848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:10011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.580 [2024-07-24 17:24:32.694876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.580 [2024-07-24 17:24:32.707648] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:46.580 [2024-07-24 17:24:32.707698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:17526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:46.580 [2024-07-24 17:24:32.707727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.580 [2024-07-24 17:24:32.720507] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:46.580 [2024-07-24 17:24:32.720557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:5551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.580 [2024-07-24 17:24:32.720585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.580 [2024-07-24 17:24:32.730816] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:46.580 [2024-07-24 17:24:32.730867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:9111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.580 [2024-07-24 17:24:32.730896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.580 [2024-07-24 17:24:32.744875] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:46.580 [2024-07-24 17:24:32.744942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:12142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.580 [2024-07-24 17:24:32.744969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.580 [2024-07-24 17:24:32.756870] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:46.580 [2024-07-24 17:24:32.756920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17138 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.580 [2024-07-24 17:24:32.756949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.580 [2024-07-24 17:24:32.769569] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:46.580 [2024-07-24 17:24:32.769619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:12828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.580 [2024-07-24 17:24:32.769647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.580 [2024-07-24 17:24:32.781582] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:46.580 [2024-07-24 17:24:32.781634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:19015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.580 [2024-07-24 17:24:32.781662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.580 [2024-07-24 17:24:32.790482] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:46.580 [2024-07-24 17:24:32.790531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 
lba:7618 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.580 [2024-07-24 17:24:32.790560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.580 [2024-07-24 17:24:32.803390] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:46.580 [2024-07-24 17:24:32.803458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.580 [2024-07-24 17:24:32.803501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.839 [2024-07-24 17:24:32.816583] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:46.839 [2024-07-24 17:24:32.816651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.839 [2024-07-24 17:24:32.816680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.839 [2024-07-24 17:24:32.829872] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:46.839 [2024-07-24 17:24:32.829924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:16453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.839 [2024-07-24 17:24:32.829953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.839 [2024-07-24 17:24:32.843430] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:46.839 [2024-07-24 17:24:32.843493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:11420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.839 [2024-07-24 17:24:32.843507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.839 [2024-07-24 17:24:32.854882] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:46.839 [2024-07-24 17:24:32.854933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.839 [2024-07-24 17:24:32.854962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.839 [2024-07-24 17:24:32.869181] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:46.839 [2024-07-24 17:24:32.869232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:21878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.839 [2024-07-24 17:24:32.869260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.839 [2024-07-24 17:24:32.881603] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:46.839 [2024-07-24 17:24:32.881673] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:4828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.839 [2024-07-24 17:24:32.881717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.839 [2024-07-24 17:24:32.895594] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:46.839 [2024-07-24 17:24:32.895645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:13555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.839 [2024-07-24 17:24:32.895674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.839 [2024-07-24 17:24:32.907587] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:46.839 [2024-07-24 17:24:32.907638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:2424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.839 [2024-07-24 17:24:32.907667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.839 [2024-07-24 17:24:32.919807] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:46.839 [2024-07-24 17:24:32.919858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:16334 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.839 [2024-07-24 17:24:32.919886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.839 [2024-07-24 17:24:32.934560] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:46.839 [2024-07-24 17:24:32.934612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:17477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.839 [2024-07-24 17:24:32.934641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.839 [2024-07-24 17:24:32.945306] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:46.839 [2024-07-24 17:24:32.945356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:14380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.839 [2024-07-24 17:24:32.945384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.839 [2024-07-24 17:24:32.958541] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:46.839 [2024-07-24 17:24:32.958593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.839 [2024-07-24 17:24:32.958621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.839 [2024-07-24 17:24:32.973605] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 
00:23:46.839 [2024-07-24 17:24:32.973659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:1350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.839 [2024-07-24 17:24:32.973689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.839 [2024-07-24 17:24:32.986678] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:46.839 [2024-07-24 17:24:32.986730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:3498 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.839 [2024-07-24 17:24:32.986758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.839 [2024-07-24 17:24:33.001354] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:46.839 [2024-07-24 17:24:33.001438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.839 [2024-07-24 17:24:33.001482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.839 [2024-07-24 17:24:33.014849] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:46.839 [2024-07-24 17:24:33.014914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:23536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.839 [2024-07-24 17:24:33.014943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.839 [2024-07-24 17:24:33.027923] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:46.839 [2024-07-24 17:24:33.027974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:15144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.839 [2024-07-24 17:24:33.028003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.839 [2024-07-24 17:24:33.040509] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:46.839 [2024-07-24 17:24:33.040560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:15652 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.839 [2024-07-24 17:24:33.040588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.839 [2024-07-24 17:24:33.052660] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:46.840 [2024-07-24 17:24:33.052712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:24330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.840 [2024-07-24 17:24:33.052740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:46.840 [2024-07-24 17:24:33.065132] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:46.840 [2024-07-24 17:24:33.065183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:1548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.840 [2024-07-24 17:24:33.065211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.130 [2024-07-24 17:24:33.077716] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:47.130 [2024-07-24 17:24:33.077767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:25142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.130 [2024-07-24 17:24:33.077796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.130 [2024-07-24 17:24:33.090315] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:47.130 [2024-07-24 17:24:33.090366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:3004 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.130 [2024-07-24 17:24:33.090395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.130 [2024-07-24 17:24:33.103724] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:47.130 [2024-07-24 17:24:33.103776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:14762 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.130 [2024-07-24 17:24:33.103804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.130 [2024-07-24 17:24:33.116904] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:47.130 [2024-07-24 17:24:33.116956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:15302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.130 [2024-07-24 17:24:33.116985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.130 [2024-07-24 17:24:33.130764] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:47.130 [2024-07-24 17:24:33.130815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:12442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.130 [2024-07-24 17:24:33.130843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.130 [2024-07-24 17:24:33.144592] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:47.130 [2024-07-24 17:24:33.144646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:17260 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.130 [2024-07-24 17:24:33.144690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:23:47.130 [2024-07-24 17:24:33.157971] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:47.130 [2024-07-24 17:24:33.158021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10530 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.130 [2024-07-24 17:24:33.158050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.130 [2024-07-24 17:24:33.169063] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:47.130 [2024-07-24 17:24:33.169137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:24015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.130 [2024-07-24 17:24:33.169151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.130 [2024-07-24 17:24:33.181791] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:47.130 [2024-07-24 17:24:33.181843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:6131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.130 [2024-07-24 17:24:33.181872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.131 [2024-07-24 17:24:33.195109] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:47.131 [2024-07-24 17:24:33.195171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:13599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.131 [2024-07-24 17:24:33.195201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.131 [2024-07-24 17:24:33.206126] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:47.131 [2024-07-24 17:24:33.206178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:7944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.131 [2024-07-24 17:24:33.206206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.131 [2024-07-24 17:24:33.219183] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:47.131 [2024-07-24 17:24:33.219222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:4809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.131 [2024-07-24 17:24:33.219235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.131 [2024-07-24 17:24:33.231662] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:47.131 [2024-07-24 17:24:33.231713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:846 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.131 [2024-07-24 17:24:33.231741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.131 [2024-07-24 17:24:33.242607] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:47.131 [2024-07-24 17:24:33.242657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:20140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.131 [2024-07-24 17:24:33.242686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.131 [2024-07-24 17:24:33.256731] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:47.131 [2024-07-24 17:24:33.256785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:14687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.131 [2024-07-24 17:24:33.256814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.131 [2024-07-24 17:24:33.271487] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:47.131 [2024-07-24 17:24:33.271529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:20804 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.131 [2024-07-24 17:24:33.271544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.131 [2024-07-24 17:24:33.285595] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:47.131 [2024-07-24 17:24:33.285636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.131 [2024-07-24 17:24:33.285650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.131 [2024-07-24 17:24:33.300048] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:47.131 [2024-07-24 17:24:33.300128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:6365 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.131 [2024-07-24 17:24:33.300144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.131 [2024-07-24 17:24:33.311885] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:47.131 [2024-07-24 17:24:33.311939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.131 [2024-07-24 17:24:33.311953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.131 [2024-07-24 17:24:33.325030] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:47.131 [2024-07-24 17:24:33.325068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:6595 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.131 [2024-07-24 17:24:33.325082] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.131 [2024-07-24 17:24:33.339954] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:47.131 [2024-07-24 17:24:33.339994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:6513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.131 [2024-07-24 17:24:33.340008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.131 [2024-07-24 17:24:33.354801] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:47.131 [2024-07-24 17:24:33.354854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:1390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.131 [2024-07-24 17:24:33.354883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.401 [2024-07-24 17:24:33.368340] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:47.401 [2024-07-24 17:24:33.368379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:1456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.401 [2024-07-24 17:24:33.368393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.401 [2024-07-24 17:24:33.381889] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:47.401 [2024-07-24 17:24:33.381941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:22711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.401 [2024-07-24 17:24:33.381970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.401 [2024-07-24 17:24:33.393018] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:47.401 [2024-07-24 17:24:33.393081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:21380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.401 [2024-07-24 17:24:33.393137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.401 [2024-07-24 17:24:33.407551] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:47.401 [2024-07-24 17:24:33.407605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:22789 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.401 [2024-07-24 17:24:33.407633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.401 [2024-07-24 17:24:33.421104] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:47.401 [2024-07-24 17:24:33.421185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:14306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:47.401 [2024-07-24 17:24:33.421215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.401 [2024-07-24 17:24:33.434539] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:47.401 [2024-07-24 17:24:33.434590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.401 [2024-07-24 17:24:33.434619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.401 [2024-07-24 17:24:33.448575] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:47.401 [2024-07-24 17:24:33.448645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:9264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.401 [2024-07-24 17:24:33.448659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.401 [2024-07-24 17:24:33.462581] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:47.401 [2024-07-24 17:24:33.462635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10558 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.401 [2024-07-24 17:24:33.462665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.401 [2024-07-24 17:24:33.475171] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:47.401 [2024-07-24 17:24:33.475210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:3837 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.401 [2024-07-24 17:24:33.475224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.401 [2024-07-24 17:24:33.488796] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:47.401 [2024-07-24 17:24:33.488864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:4978 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.401 [2024-07-24 17:24:33.488892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.401 [2024-07-24 17:24:33.502423] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:47.401 [2024-07-24 17:24:33.502477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:6064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.401 [2024-07-24 17:24:33.502507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.401 [2024-07-24 17:24:33.515460] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:47.401 [2024-07-24 17:24:33.515514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:9890 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.401 [2024-07-24 17:24:33.515558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.401 [2024-07-24 17:24:33.530232] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:47.401 [2024-07-24 17:24:33.530285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.401 [2024-07-24 17:24:33.530316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.401 [2024-07-24 17:24:33.542227] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:47.401 [2024-07-24 17:24:33.542267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:25038 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.401 [2024-07-24 17:24:33.542281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.401 [2024-07-24 17:24:33.555950] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:47.401 [2024-07-24 17:24:33.556003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:6069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.401 [2024-07-24 17:24:33.556033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.401 [2024-07-24 17:24:33.571073] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:47.401 [2024-07-24 17:24:33.571126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:2729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.401 [2024-07-24 17:24:33.571141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.401 [2024-07-24 17:24:33.584933] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:47.401 [2024-07-24 17:24:33.584988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.401 [2024-07-24 17:24:33.585017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.401 [2024-07-24 17:24:33.598413] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:47.401 [2024-07-24 17:24:33.598466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:12228 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.401 [2024-07-24 17:24:33.598495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.401 [2024-07-24 17:24:33.611480] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:47.401 [2024-07-24 17:24:33.611536] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:18777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.401 [2024-07-24 17:24:33.611580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.401 [2024-07-24 17:24:33.624793] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x221ee30) 00:23:47.401 [2024-07-24 17:24:33.624846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:8638 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.401 [2024-07-24 17:24:33.624875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:47.659 00:23:47.659 Latency(us) 00:23:47.659 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:47.659 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:23:47.659 nvme0n1 : 2.00 19451.41 75.98 0.00 0.00 6572.46 3395.96 18230.92 00:23:47.659 =================================================================================================================== 00:23:47.659 Total : 19451.41 75.98 0.00 0.00 6572.46 3395.96 18230.92 00:23:47.659 0 00:23:47.659 17:24:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:23:47.659 17:24:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:23:47.659 17:24:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:23:47.659 17:24:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:23:47.659 | .driver_specific 00:23:47.659 | .nvme_error 00:23:47.659 | .status_code 00:23:47.659 | .command_transient_transport_error' 00:23:47.918 17:24:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 152 > 0 )) 00:23:47.918 17:24:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 92985 00:23:47.918 17:24:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 92985 ']' 00:23:47.918 17:24:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 92985 00:23:47.918 17:24:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:23:47.918 17:24:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:47.918 17:24:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92985 00:23:47.918 17:24:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:47.918 17:24:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:47.918 killing process with pid 92985 00:23:47.918 17:24:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92985' 00:23:47.918 17:24:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 92985 00:23:47.918 Received shutdown signal, test time was about 2.000000 seconds 
00:23:47.918 00:23:47.918 Latency(us) 00:23:47.918 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:47.918 =================================================================================================================== 00:23:47.918 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:47.918 17:24:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 92985 00:23:48.177 17:24:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:23:48.177 17:24:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:23:48.177 17:24:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:23:48.177 17:24:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:23:48.177 17:24:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:23:48.177 17:24:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=93070 00:23:48.177 17:24:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:23:48.177 17:24:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 93070 /var/tmp/bperf.sock 00:23:48.177 17:24:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 93070 ']' 00:23:48.177 17:24:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:48.177 17:24:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:48.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:48.177 17:24:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:48.177 17:24:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:48.177 17:24:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:48.177 [2024-07-24 17:24:34.242568] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:23:48.177 [2024-07-24 17:24:34.243215] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93070 ] 00:23:48.177 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:48.177 Zero copy mechanism will not be used. 
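For reference, the transient-error check that closed the previous run (the (( 152 > 0 )) test above) boils down to querying bdevperf's RPC socket for per-bdev NVMe error counters and pulling command_transient_transport_error out of the driver-specific statistics with jq. A minimal standalone sketch of that extraction, assuming the rpc.py path and the /var/tmp/bperf.sock socket used in this run:

  # The nvme_error counters are exposed under driver_specific only when the
  # controller was configured with bdev_nvme_set_options --nvme-error-stat.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
      jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  # The digest_error test passes this phase only if at least one transient
  # transport error (i.e. a detected data digest mismatch) was recorded.
  (( errcount > 0 )) && echo "detected $errcount transient transport errors"

A digest mismatch is reported with a transient transport status and dnr:0 so the initiator can simply retry the read, which is why the test asserts on this counter rather than on failed I/O (the Fail/s column in the summary above stays at 0.00 even though 152 transient errors were counted).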
00:23:48.177 [2024-07-24 17:24:34.378401] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:48.436 [2024-07-24 17:24:34.483624] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:49.003 17:24:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:49.003 17:24:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:23:49.003 17:24:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:49.003 17:24:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:49.261 17:24:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:23:49.262 17:24:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:49.262 17:24:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:49.262 17:24:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:49.262 17:24:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:49.262 17:24:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:49.520 nvme0n1 00:23:49.520 17:24:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:23:49.520 17:24:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:49.520 17:24:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:49.520 17:24:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:49.520 17:24:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:23:49.520 17:24:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:49.780 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:49.780 Zero copy mechanism will not be used. 00:23:49.780 Running I/O for 2 seconds... 
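The RPC sequence just above is what produces the wall of data digest errors that follows: the controller is attached with TCP data digest enabled (--ddgst) while the accel error module is armed to corrupt crc32c results (accel_error_inject_error -o crc32c -t corrupt -i 32), so the digest check in the host's nvme_tcp layer fails and the affected READs complete with COMMAND TRANSIENT TRANSPORT ERROR, which the bdev_nvme layer is configured to retry (--bdev-retry-count -1) and count (--nvme-error-stat). A rough, hedged equivalent of that setup outside the digest.sh harness is sketched below; the socket used for the injection call is an assumption, since the harness issues it through rpc_cmd rather than through /var/tmp/bperf.sock:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  bperf_sock=/var/tmp/bperf.sock   # bdevperf's RPC socket, as in this run
  inject_sock=/var/tmp/spdk.sock   # assumption: socket of the app with the accel_error module loaded

  # Count NVMe errors per status code and retry failed I/O indefinitely instead of failing the job.
  "$rpc" -s "$bperf_sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Attach the controller with TCP data digest enabled so received read payloads are digest-checked.
  "$rpc" -s "$bperf_sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Arm crc32c corruption (arguments taken verbatim from this run), then drive I/O for 2 seconds.
  "$rpc" -s "$inject_sock" accel_error_inject_error -o crc32c -t corrupt -i 32
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$bperf_sock" perform_tests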
00:23:49.780 [2024-07-24 17:24:35.832116] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:49.780 [2024-07-24 17:24:35.832214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.780 [2024-07-24 17:24:35.832231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:49.780 [2024-07-24 17:24:35.836781] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:49.780 [2024-07-24 17:24:35.836836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.780 [2024-07-24 17:24:35.836866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:49.780 [2024-07-24 17:24:35.841939] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:49.780 [2024-07-24 17:24:35.842008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.780 [2024-07-24 17:24:35.842038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:49.780 [2024-07-24 17:24:35.847208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:49.780 [2024-07-24 17:24:35.847250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.780 [2024-07-24 17:24:35.847264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:49.780 [2024-07-24 17:24:35.851858] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:49.780 [2024-07-24 17:24:35.851913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.780 [2024-07-24 17:24:35.851942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:49.780 [2024-07-24 17:24:35.854681] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:49.780 [2024-07-24 17:24:35.854734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.780 [2024-07-24 17:24:35.854764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:49.780 [2024-07-24 17:24:35.859269] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:49.780 [2024-07-24 17:24:35.859311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.780 [2024-07-24 17:24:35.859326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:49.780 [2024-07-24 17:24:35.862545] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:49.780 [2024-07-24 17:24:35.862599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.780 [2024-07-24 17:24:35.862629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:49.780 [2024-07-24 17:24:35.866996] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:49.780 [2024-07-24 17:24:35.867074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.780 [2024-07-24 17:24:35.867106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:49.780 [2024-07-24 17:24:35.871728] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:49.780 [2024-07-24 17:24:35.871783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.780 [2024-07-24 17:24:35.871812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:49.780 [2024-07-24 17:24:35.875715] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:49.780 [2024-07-24 17:24:35.875801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.780 [2024-07-24 17:24:35.875831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:49.780 [2024-07-24 17:24:35.879745] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:49.780 [2024-07-24 17:24:35.879797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.780 [2024-07-24 17:24:35.879827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:49.780 [2024-07-24 17:24:35.884073] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:49.780 [2024-07-24 17:24:35.884138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.780 [2024-07-24 17:24:35.884168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:49.780 [2024-07-24 17:24:35.888404] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:49.780 [2024-07-24 17:24:35.888457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.780 [2024-07-24 17:24:35.888486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:49.780 [2024-07-24 17:24:35.892468] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:49.780 [2024-07-24 17:24:35.892526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.780 [2024-07-24 17:24:35.892540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:49.780 [2024-07-24 17:24:35.896235] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:49.780 [2024-07-24 17:24:35.896286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.780 [2024-07-24 17:24:35.896315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:49.780 [2024-07-24 17:24:35.900437] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:49.780 [2024-07-24 17:24:35.900507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.780 [2024-07-24 17:24:35.900536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:49.780 [2024-07-24 17:24:35.904077] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:49.780 [2024-07-24 17:24:35.904172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.780 [2024-07-24 17:24:35.904203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:49.780 [2024-07-24 17:24:35.908160] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:49.780 [2024-07-24 17:24:35.908220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.780 [2024-07-24 17:24:35.908233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:49.780 [2024-07-24 17:24:35.911817] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:49.780 [2024-07-24 17:24:35.911869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.780 [2024-07-24 17:24:35.911897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:49.780 [2024-07-24 17:24:35.915328] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:49.780 [2024-07-24 17:24:35.915401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.780 [2024-07-24 17:24:35.915445] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:49.780 [2024-07-24 17:24:35.918372] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:49.780 [2024-07-24 17:24:35.918424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.780 [2024-07-24 17:24:35.918453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:49.780 [2024-07-24 17:24:35.922594] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:49.780 [2024-07-24 17:24:35.922646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.780 [2024-07-24 17:24:35.922674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:49.780 [2024-07-24 17:24:35.926375] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:49.780 [2024-07-24 17:24:35.926427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.780 [2024-07-24 17:24:35.926455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:49.780 [2024-07-24 17:24:35.929799] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:49.780 [2024-07-24 17:24:35.929851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.780 [2024-07-24 17:24:35.929879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:49.780 [2024-07-24 17:24:35.933649] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:49.780 [2024-07-24 17:24:35.933685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.781 [2024-07-24 17:24:35.933712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:49.781 [2024-07-24 17:24:35.937701] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:49.781 [2024-07-24 17:24:35.937753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.781 [2024-07-24 17:24:35.937782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:49.781 [2024-07-24 17:24:35.941247] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:49.781 [2024-07-24 17:24:35.941299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.781 
[2024-07-24 17:24:35.941327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:49.781 [2024-07-24 17:24:35.945691] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:49.781 [2024-07-24 17:24:35.945742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.781 [2024-07-24 17:24:35.945771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:49.781 [2024-07-24 17:24:35.949370] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:49.781 [2024-07-24 17:24:35.949422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.781 [2024-07-24 17:24:35.949450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:49.781 [2024-07-24 17:24:35.952818] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:49.781 [2024-07-24 17:24:35.952869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.781 [2024-07-24 17:24:35.952897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:49.781 [2024-07-24 17:24:35.957455] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:49.781 [2024-07-24 17:24:35.957506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.781 [2024-07-24 17:24:35.957534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:49.781 [2024-07-24 17:24:35.961699] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:49.781 [2024-07-24 17:24:35.961751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.781 [2024-07-24 17:24:35.961779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:49.781 [2024-07-24 17:24:35.964962] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:49.781 [2024-07-24 17:24:35.965013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.781 [2024-07-24 17:24:35.965041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:49.781 [2024-07-24 17:24:35.969032] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:49.781 [2024-07-24 17:24:35.969085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:23:49.781 [2024-07-24 17:24:35.969125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:49.781 [2024-07-24 17:24:35.973873] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:49.781 [2024-07-24 17:24:35.973925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.781 [2024-07-24 17:24:35.973952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:49.781 [2024-07-24 17:24:35.978752] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:49.781 [2024-07-24 17:24:35.978804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.781 [2024-07-24 17:24:35.978832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:49.781 [2024-07-24 17:24:35.982175] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:49.781 [2024-07-24 17:24:35.982244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.781 [2024-07-24 17:24:35.982272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:49.781 [2024-07-24 17:24:35.985851] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:49.781 [2024-07-24 17:24:35.985903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.781 [2024-07-24 17:24:35.985931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:49.781 [2024-07-24 17:24:35.990780] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:49.781 [2024-07-24 17:24:35.990832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.781 [2024-07-24 17:24:35.990861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:49.781 [2024-07-24 17:24:35.995468] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:49.781 [2024-07-24 17:24:35.995540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.781 [2024-07-24 17:24:35.995569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:49.781 [2024-07-24 17:24:35.998435] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:49.781 [2024-07-24 17:24:35.998505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 
lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.781 [2024-07-24 17:24:35.998535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:49.781 [2024-07-24 17:24:36.003973] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:49.781 [2024-07-24 17:24:36.004045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.781 [2024-07-24 17:24:36.004058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:49.781 [2024-07-24 17:24:36.008619] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:49.781 [2024-07-24 17:24:36.008691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.781 [2024-07-24 17:24:36.008721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:49.781 [2024-07-24 17:24:36.011906] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:49.781 [2024-07-24 17:24:36.011958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:49.781 [2024-07-24 17:24:36.012003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.042 [2024-07-24 17:24:36.017093] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.042 [2024-07-24 17:24:36.017188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.042 [2024-07-24 17:24:36.017219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.042 [2024-07-24 17:24:36.020834] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.042 [2024-07-24 17:24:36.020886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.042 [2024-07-24 17:24:36.020915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.042 [2024-07-24 17:24:36.024445] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.042 [2024-07-24 17:24:36.024498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.042 [2024-07-24 17:24:36.024527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.042 [2024-07-24 17:24:36.028844] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.042 [2024-07-24 17:24:36.028897] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.042 [2024-07-24 17:24:36.028926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.042 [2024-07-24 17:24:36.032428] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.042 [2024-07-24 17:24:36.032480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.042 [2024-07-24 17:24:36.032509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.042 [2024-07-24 17:24:36.036810] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.042 [2024-07-24 17:24:36.036864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.042 [2024-07-24 17:24:36.036893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.042 [2024-07-24 17:24:36.041520] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.042 [2024-07-24 17:24:36.041573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.042 [2024-07-24 17:24:36.041602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.042 [2024-07-24 17:24:36.044822] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.042 [2024-07-24 17:24:36.044875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.042 [2024-07-24 17:24:36.044904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.042 [2024-07-24 17:24:36.049503] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.042 [2024-07-24 17:24:36.049555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.042 [2024-07-24 17:24:36.049583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.042 [2024-07-24 17:24:36.054949] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.042 [2024-07-24 17:24:36.055020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.042 [2024-07-24 17:24:36.055034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.042 [2024-07-24 17:24:36.058996] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.042 
[2024-07-24 17:24:36.059060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.042 [2024-07-24 17:24:36.059074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.042 [2024-07-24 17:24:36.062619] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.042 [2024-07-24 17:24:36.062672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.042 [2024-07-24 17:24:36.062700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.043 [2024-07-24 17:24:36.067467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.043 [2024-07-24 17:24:36.067534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.043 [2024-07-24 17:24:36.067562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.043 [2024-07-24 17:24:36.072328] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.043 [2024-07-24 17:24:36.072396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.043 [2024-07-24 17:24:36.072425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.043 [2024-07-24 17:24:36.075961] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.043 [2024-07-24 17:24:36.076029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.043 [2024-07-24 17:24:36.076057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.043 [2024-07-24 17:24:36.079800] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.043 [2024-07-24 17:24:36.079850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.043 [2024-07-24 17:24:36.079878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.043 [2024-07-24 17:24:36.083129] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.043 [2024-07-24 17:24:36.083168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.043 [2024-07-24 17:24:36.083182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.043 [2024-07-24 17:24:36.087176] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xff1fd0) 00:23:50.043 [2024-07-24 17:24:36.087216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.043 [2024-07-24 17:24:36.087230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.043 [2024-07-24 17:24:36.091932] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.043 [2024-07-24 17:24:36.092002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.043 [2024-07-24 17:24:36.092047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.043 [2024-07-24 17:24:36.095517] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.043 [2024-07-24 17:24:36.095568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.043 [2024-07-24 17:24:36.095596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.043 [2024-07-24 17:24:36.099845] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.043 [2024-07-24 17:24:36.099898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.043 [2024-07-24 17:24:36.099926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.043 [2024-07-24 17:24:36.104699] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.043 [2024-07-24 17:24:36.104751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.043 [2024-07-24 17:24:36.104779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.043 [2024-07-24 17:24:36.108915] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.043 [2024-07-24 17:24:36.108966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.043 [2024-07-24 17:24:36.108994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.043 [2024-07-24 17:24:36.111914] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.043 [2024-07-24 17:24:36.111965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.043 [2024-07-24 17:24:36.111993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.043 [2024-07-24 17:24:36.116391] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.043 [2024-07-24 17:24:36.116443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.043 [2024-07-24 17:24:36.116471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.043 [2024-07-24 17:24:36.120554] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.043 [2024-07-24 17:24:36.120606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.043 [2024-07-24 17:24:36.120634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.043 [2024-07-24 17:24:36.123793] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.043 [2024-07-24 17:24:36.123846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.043 [2024-07-24 17:24:36.123875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.043 [2024-07-24 17:24:36.127957] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.043 [2024-07-24 17:24:36.128009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.043 [2024-07-24 17:24:36.128038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.043 [2024-07-24 17:24:36.132894] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.043 [2024-07-24 17:24:36.132947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.043 [2024-07-24 17:24:36.132975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.043 [2024-07-24 17:24:36.136449] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.043 [2024-07-24 17:24:36.136500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.043 [2024-07-24 17:24:36.136528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.043 [2024-07-24 17:24:36.140786] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.043 [2024-07-24 17:24:36.140838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.043 [2024-07-24 17:24:36.140866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
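The repeated "data digest error" entries above come from the host-side check of the NVMe/TCP data digest (DDGST), a CRC32C computed over the payload of each received data PDU; when the computed value does not match the digest carried in the PDU, the READ is completed with the transient transport error status printed next to it. Below is a minimal sketch of that CRC32C calculation, assuming plain Python with no SPDK dependencies (the crc32c name is illustrative; real stacks use table- or instruction-accelerated versions of the same polynomial):

    # Minimal CRC-32C (Castagnoli) sketch -- the digest NVMe/TCP uses for DDGST.
    # Bitwise, reflected form: init 0xFFFFFFFF, reflected poly 0x82F63B78,
    # final XOR 0xFFFFFFFF.
    def crc32c(data: bytes) -> int:
        crc = 0xFFFFFFFF
        for byte in data:
            crc ^= byte
            for _ in range(8):
                crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
        return crc ^ 0xFFFFFFFF

    # Standard CRC-32C check value for the ASCII string "123456789".
    assert crc32c(b"123456789") == 0xE3069283

A "data digest error" in the log simply means the digest computed this way over the received data did not equal the DDGST field at the end of the PDU.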
00:23:50.043 [2024-07-24 17:24:36.144677] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.043 [2024-07-24 17:24:36.144730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.043 [2024-07-24 17:24:36.144758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.043 [2024-07-24 17:24:36.148019] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.043 [2024-07-24 17:24:36.148072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.043 [2024-07-24 17:24:36.148100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.043 [2024-07-24 17:24:36.151807] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.043 [2024-07-24 17:24:36.151859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.043 [2024-07-24 17:24:36.151887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.043 [2024-07-24 17:24:36.155364] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.043 [2024-07-24 17:24:36.155460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.043 [2024-07-24 17:24:36.155488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.043 [2024-07-24 17:24:36.158760] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.043 [2024-07-24 17:24:36.158816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.043 [2024-07-24 17:24:36.158845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.043 [2024-07-24 17:24:36.162639] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.043 [2024-07-24 17:24:36.162690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.043 [2024-07-24 17:24:36.162717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.043 [2024-07-24 17:24:36.166454] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.043 [2024-07-24 17:24:36.166506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.043 [2024-07-24 17:24:36.166549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.043 [2024-07-24 17:24:36.170146] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.043 [2024-07-24 17:24:36.170196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.043 [2024-07-24 17:24:36.170225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.043 [2024-07-24 17:24:36.173258] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.043 [2024-07-24 17:24:36.173310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.043 [2024-07-24 17:24:36.173338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.043 [2024-07-24 17:24:36.177163] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.043 [2024-07-24 17:24:36.177214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.043 [2024-07-24 17:24:36.177242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.043 [2024-07-24 17:24:36.180934] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.043 [2024-07-24 17:24:36.180986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.043 [2024-07-24 17:24:36.181013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.043 [2024-07-24 17:24:36.184494] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.044 [2024-07-24 17:24:36.184545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.044 [2024-07-24 17:24:36.184574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.044 [2024-07-24 17:24:36.188343] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.044 [2024-07-24 17:24:36.188394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.044 [2024-07-24 17:24:36.188422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.044 [2024-07-24 17:24:36.192327] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.044 [2024-07-24 17:24:36.192380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.044 [2024-07-24 17:24:36.192408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.044 [2024-07-24 17:24:36.196352] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.044 [2024-07-24 17:24:36.196404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.044 [2024-07-24 17:24:36.196432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.044 [2024-07-24 17:24:36.199903] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.044 [2024-07-24 17:24:36.199955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.044 [2024-07-24 17:24:36.199983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.044 [2024-07-24 17:24:36.203209] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.044 [2024-07-24 17:24:36.203279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.044 [2024-07-24 17:24:36.203309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.044 [2024-07-24 17:24:36.206961] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.044 [2024-07-24 17:24:36.207012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.044 [2024-07-24 17:24:36.207040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.044 [2024-07-24 17:24:36.210798] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.044 [2024-07-24 17:24:36.210849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.044 [2024-07-24 17:24:36.210878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.044 [2024-07-24 17:24:36.214330] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.044 [2024-07-24 17:24:36.214382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.044 [2024-07-24 17:24:36.214410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.044 [2024-07-24 17:24:36.218378] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.044 [2024-07-24 17:24:36.218430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.044 [2024-07-24 17:24:36.218459] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.044 [2024-07-24 17:24:36.222036] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.044 [2024-07-24 17:24:36.222088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.044 [2024-07-24 17:24:36.222143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.044 [2024-07-24 17:24:36.225190] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.044 [2024-07-24 17:24:36.225241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.044 [2024-07-24 17:24:36.225269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.044 [2024-07-24 17:24:36.228994] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.044 [2024-07-24 17:24:36.229045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.044 [2024-07-24 17:24:36.229074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.044 [2024-07-24 17:24:36.232568] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.044 [2024-07-24 17:24:36.232619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.044 [2024-07-24 17:24:36.232646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.044 [2024-07-24 17:24:36.236337] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.044 [2024-07-24 17:24:36.236389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.044 [2024-07-24 17:24:36.236417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.044 [2024-07-24 17:24:36.240182] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.044 [2024-07-24 17:24:36.240243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.044 [2024-07-24 17:24:36.240272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.044 [2024-07-24 17:24:36.244446] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.044 [2024-07-24 17:24:36.244498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.044 
[2024-07-24 17:24:36.244526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.044 [2024-07-24 17:24:36.248498] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.044 [2024-07-24 17:24:36.248549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.044 [2024-07-24 17:24:36.248577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.044 [2024-07-24 17:24:36.252613] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.044 [2024-07-24 17:24:36.252665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.044 [2024-07-24 17:24:36.252694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.044 [2024-07-24 17:24:36.256594] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.044 [2024-07-24 17:24:36.256645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.044 [2024-07-24 17:24:36.256673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.044 [2024-07-24 17:24:36.260900] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.044 [2024-07-24 17:24:36.260943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.044 [2024-07-24 17:24:36.260957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.044 [2024-07-24 17:24:36.265087] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.044 [2024-07-24 17:24:36.265163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.044 [2024-07-24 17:24:36.265192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.044 [2024-07-24 17:24:36.268702] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.044 [2024-07-24 17:24:36.268753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.044 [2024-07-24 17:24:36.268781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.044 [2024-07-24 17:24:36.272285] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.044 [2024-07-24 17:24:36.272334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9824 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.044 [2024-07-24 17:24:36.272362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.305 [2024-07-24 17:24:36.276959] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.305 [2024-07-24 17:24:36.277010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.305 [2024-07-24 17:24:36.277038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.305 [2024-07-24 17:24:36.280916] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.305 [2024-07-24 17:24:36.280968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.305 [2024-07-24 17:24:36.281012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.305 [2024-07-24 17:24:36.285162] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.305 [2024-07-24 17:24:36.285199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.305 [2024-07-24 17:24:36.285228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.305 [2024-07-24 17:24:36.289032] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.305 [2024-07-24 17:24:36.289069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.305 [2024-07-24 17:24:36.289097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.305 [2024-07-24 17:24:36.293200] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.305 [2024-07-24 17:24:36.293239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.305 [2024-07-24 17:24:36.293268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.305 [2024-07-24 17:24:36.296870] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.305 [2024-07-24 17:24:36.296908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.305 [2024-07-24 17:24:36.296937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.305 [2024-07-24 17:24:36.300698] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.305 [2024-07-24 17:24:36.300735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.305 [2024-07-24 17:24:36.300764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.305 [2024-07-24 17:24:36.304847] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.305 [2024-07-24 17:24:36.304884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.305 [2024-07-24 17:24:36.304914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.305 [2024-07-24 17:24:36.308400] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.305 [2024-07-24 17:24:36.308438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.305 [2024-07-24 17:24:36.308467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.305 [2024-07-24 17:24:36.311860] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.305 [2024-07-24 17:24:36.311897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.306 [2024-07-24 17:24:36.311926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.306 [2024-07-24 17:24:36.316277] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.306 [2024-07-24 17:24:36.316331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.306 [2024-07-24 17:24:36.316360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.306 [2024-07-24 17:24:36.320473] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.306 [2024-07-24 17:24:36.320512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.306 [2024-07-24 17:24:36.320541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.306 [2024-07-24 17:24:36.324693] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.306 [2024-07-24 17:24:36.324746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.306 [2024-07-24 17:24:36.324776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.306 [2024-07-24 17:24:36.329854] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.306 [2024-07-24 17:24:36.329893] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.306 [2024-07-24 17:24:36.329923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.306 [2024-07-24 17:24:36.334066] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.306 [2024-07-24 17:24:36.334130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.306 [2024-07-24 17:24:36.334146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.306 [2024-07-24 17:24:36.338588] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.306 [2024-07-24 17:24:36.338624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.306 [2024-07-24 17:24:36.338653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.306 [2024-07-24 17:24:36.341722] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.306 [2024-07-24 17:24:36.341758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.306 [2024-07-24 17:24:36.341787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.306 [2024-07-24 17:24:36.346237] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.306 [2024-07-24 17:24:36.346288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.306 [2024-07-24 17:24:36.346319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.306 [2024-07-24 17:24:36.351759] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.306 [2024-07-24 17:24:36.351797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.306 [2024-07-24 17:24:36.351827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.306 [2024-07-24 17:24:36.356683] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.306 [2024-07-24 17:24:36.356723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.306 [2024-07-24 17:24:36.356753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.306 [2024-07-24 17:24:36.360013] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.306 
[2024-07-24 17:24:36.360048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.306 [2024-07-24 17:24:36.360077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.306 [2024-07-24 17:24:36.364131] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.306 [2024-07-24 17:24:36.364176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.306 [2024-07-24 17:24:36.364205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.306 [2024-07-24 17:24:36.367413] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.306 [2024-07-24 17:24:36.367467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.306 [2024-07-24 17:24:36.367497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.306 [2024-07-24 17:24:36.372664] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.306 [2024-07-24 17:24:36.372704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.306 [2024-07-24 17:24:36.372735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.306 [2024-07-24 17:24:36.376211] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.306 [2024-07-24 17:24:36.376250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.306 [2024-07-24 17:24:36.376265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.306 [2024-07-24 17:24:36.380764] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.306 [2024-07-24 17:24:36.380802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.306 [2024-07-24 17:24:36.380832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.306 [2024-07-24 17:24:36.385374] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.306 [2024-07-24 17:24:36.385428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.306 [2024-07-24 17:24:36.385442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.306 [2024-07-24 17:24:36.389120] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xff1fd0) 00:23:50.306 [2024-07-24 17:24:36.389185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.306 [2024-07-24 17:24:36.389201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.306 [2024-07-24 17:24:36.393562] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.306 [2024-07-24 17:24:36.393598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.306 [2024-07-24 17:24:36.393628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.306 [2024-07-24 17:24:36.397985] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.306 [2024-07-24 17:24:36.398022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.306 [2024-07-24 17:24:36.398050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.306 [2024-07-24 17:24:36.401914] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.306 [2024-07-24 17:24:36.401951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.306 [2024-07-24 17:24:36.401981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.306 [2024-07-24 17:24:36.405921] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.306 [2024-07-24 17:24:36.405958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.306 [2024-07-24 17:24:36.405988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.306 [2024-07-24 17:24:36.410125] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.306 [2024-07-24 17:24:36.410160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.306 [2024-07-24 17:24:36.410190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.306 [2024-07-24 17:24:36.414410] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.306 [2024-07-24 17:24:36.414462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.306 [2024-07-24 17:24:36.414491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.306 [2024-07-24 17:24:36.417772] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.306 [2024-07-24 17:24:36.417808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.306 [2024-07-24 17:24:36.417837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.306 [2024-07-24 17:24:36.422429] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.306 [2024-07-24 17:24:36.422467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.306 [2024-07-24 17:24:36.422497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.306 [2024-07-24 17:24:36.426861] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.306 [2024-07-24 17:24:36.426899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.307 [2024-07-24 17:24:36.426928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.307 [2024-07-24 17:24:36.431036] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.307 [2024-07-24 17:24:36.431109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.307 [2024-07-24 17:24:36.431124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.307 [2024-07-24 17:24:36.434115] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.307 [2024-07-24 17:24:36.434149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.307 [2024-07-24 17:24:36.434179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.307 [2024-07-24 17:24:36.438343] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.307 [2024-07-24 17:24:36.438381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.307 [2024-07-24 17:24:36.438410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.307 [2024-07-24 17:24:36.442772] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.307 [2024-07-24 17:24:36.442810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.307 [2024-07-24 17:24:36.442839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
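Each digest failure above is surfaced as a completion with status "(00/22)": status code type 0x0 (generic command status) and status code 0x22, which spdk_nvme_print_completion renders as COMMAND TRANSIENT TRANSPORT ERROR, with dnr:0 indicating the command may be retried. A small, hypothetical decoder for that printed pair (the helper name and any table entries beyond the codes visible in this log are assumptions, not SPDK code):

    # Hypothetical helper: decode the "(SCT/SC)" pair printed above, e.g. "(00/22)".
    NVME_SCT = {0x0: "GENERIC", 0x1: "COMMAND SPECIFIC", 0x2: "MEDIA AND DATA INTEGRITY"}
    GENERIC_SC = {0x00: "SUCCESS", 0x22: "COMMAND TRANSIENT TRANSPORT ERROR"}  # codes seen in this log

    def decode_status(pair: str) -> str:
        sct, sc = (int(x, 16) for x in pair.strip("()").split("/"))
        sct_name = NVME_SCT.get(sct, f"SCT {sct:#x}")
        sc_name = GENERIC_SC.get(sc, f"SC {sc:#x}") if sct == 0 else f"SC {sc:#x}"
        return f"{sct_name}: {sc_name}"

    print(decode_status("(00/22)"))  # GENERIC: COMMAND TRANSIENT TRANSPORT ERROR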
00:23:50.307 [2024-07-24 17:24:36.447191] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.307 [2024-07-24 17:24:36.447393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.307 [2024-07-24 17:24:36.447553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.307 [2024-07-24 17:24:36.450531] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.307 [2024-07-24 17:24:36.450564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.307 [2024-07-24 17:24:36.450594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.307 [2024-07-24 17:24:36.454998] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.307 [2024-07-24 17:24:36.455036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.307 [2024-07-24 17:24:36.455130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.307 [2024-07-24 17:24:36.458710] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.307 [2024-07-24 17:24:36.458747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.307 [2024-07-24 17:24:36.458777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.307 [2024-07-24 17:24:36.462865] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.307 [2024-07-24 17:24:36.462902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.307 [2024-07-24 17:24:36.462932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.307 [2024-07-24 17:24:36.465983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.307 [2024-07-24 17:24:36.466020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.307 [2024-07-24 17:24:36.466049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.307 [2024-07-24 17:24:36.469590] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.307 [2024-07-24 17:24:36.469626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.307 [2024-07-24 17:24:36.469655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.307 [2024-07-24 17:24:36.474288] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.307 [2024-07-24 17:24:36.474324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.307 [2024-07-24 17:24:36.474354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.307 [2024-07-24 17:24:36.478256] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.307 [2024-07-24 17:24:36.478292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.307 [2024-07-24 17:24:36.478322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.307 [2024-07-24 17:24:36.481571] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.307 [2024-07-24 17:24:36.481607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.307 [2024-07-24 17:24:36.481636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.307 [2024-07-24 17:24:36.485619] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.307 [2024-07-24 17:24:36.485656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.307 [2024-07-24 17:24:36.485686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.307 [2024-07-24 17:24:36.489708] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.307 [2024-07-24 17:24:36.489746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.307 [2024-07-24 17:24:36.489775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.307 [2024-07-24 17:24:36.493599] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.307 [2024-07-24 17:24:36.493666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.307 [2024-07-24 17:24:36.493695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.307 [2024-07-24 17:24:36.497566] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.307 [2024-07-24 17:24:36.497602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.307 [2024-07-24 17:24:36.497632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.307 [2024-07-24 17:24:36.501178] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.307 [2024-07-24 17:24:36.501214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.307 [2024-07-24 17:24:36.501243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.307 [2024-07-24 17:24:36.505063] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.307 [2024-07-24 17:24:36.505125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.307 [2024-07-24 17:24:36.505140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.307 [2024-07-24 17:24:36.509004] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.307 [2024-07-24 17:24:36.509041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.307 [2024-07-24 17:24:36.509070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.307 [2024-07-24 17:24:36.512160] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.307 [2024-07-24 17:24:36.512194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.307 [2024-07-24 17:24:36.512222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.307 [2024-07-24 17:24:36.516389] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.307 [2024-07-24 17:24:36.516425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.307 [2024-07-24 17:24:36.516454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.307 [2024-07-24 17:24:36.521117] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.307 [2024-07-24 17:24:36.521153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.307 [2024-07-24 17:24:36.521182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.307 [2024-07-24 17:24:36.524083] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.307 [2024-07-24 17:24:36.524145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.307 [2024-07-24 17:24:36.524159] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.307 [2024-07-24 17:24:36.527793] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.307 [2024-07-24 17:24:36.527830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.307 [2024-07-24 17:24:36.527859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.307 [2024-07-24 17:24:36.531901] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.307 [2024-07-24 17:24:36.531937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.307 [2024-07-24 17:24:36.531966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.308 [2024-07-24 17:24:36.536176] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.308 [2024-07-24 17:24:36.536382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.308 [2024-07-24 17:24:36.536416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.568 [2024-07-24 17:24:36.540141] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.568 [2024-07-24 17:24:36.540347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.568 [2024-07-24 17:24:36.540566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.568 [2024-07-24 17:24:36.545783] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.568 [2024-07-24 17:24:36.545955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.568 [2024-07-24 17:24:36.545989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.568 [2024-07-24 17:24:36.551092] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.568 [2024-07-24 17:24:36.551284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.568 [2024-07-24 17:24:36.551415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.568 [2024-07-24 17:24:36.555109] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.568 [2024-07-24 17:24:36.555297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.568 
[2024-07-24 17:24:36.555425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.568 [2024-07-24 17:24:36.559081] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.568 [2024-07-24 17:24:36.559246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.568 [2024-07-24 17:24:36.559264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.568 [2024-07-24 17:24:36.563526] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.568 [2024-07-24 17:24:36.563564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.568 [2024-07-24 17:24:36.563594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.568 [2024-07-24 17:24:36.567294] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.568 [2024-07-24 17:24:36.567335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.568 [2024-07-24 17:24:36.567350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.568 [2024-07-24 17:24:36.570802] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.568 [2024-07-24 17:24:36.570839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.568 [2024-07-24 17:24:36.570869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.568 [2024-07-24 17:24:36.574590] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.568 [2024-07-24 17:24:36.574627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.568 [2024-07-24 17:24:36.574656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.568 [2024-07-24 17:24:36.578593] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.568 [2024-07-24 17:24:36.578630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.568 [2024-07-24 17:24:36.578660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.568 [2024-07-24 17:24:36.582788] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.568 [2024-07-24 17:24:36.582828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2176 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:23:50.568 [2024-07-24 17:24:36.582874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.568 [2024-07-24 17:24:36.586919] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.568 [2024-07-24 17:24:36.586959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.568 [2024-07-24 17:24:36.587005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.569 [2024-07-24 17:24:36.590205] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.569 [2024-07-24 17:24:36.590243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.569 [2024-07-24 17:24:36.590256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.569 [2024-07-24 17:24:36.594181] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.569 [2024-07-24 17:24:36.594219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.569 [2024-07-24 17:24:36.594248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.569 [2024-07-24 17:24:36.598008] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.569 [2024-07-24 17:24:36.598045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.569 [2024-07-24 17:24:36.598075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.569 [2024-07-24 17:24:36.601900] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.569 [2024-07-24 17:24:36.601954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.569 [2024-07-24 17:24:36.602000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.569 [2024-07-24 17:24:36.605795] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.569 [2024-07-24 17:24:36.605833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.569 [2024-07-24 17:24:36.605863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.569 [2024-07-24 17:24:36.610375] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.569 [2024-07-24 17:24:36.610415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 
nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.569 [2024-07-24 17:24:36.610445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.569 [2024-07-24 17:24:36.614060] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.569 [2024-07-24 17:24:36.614123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.569 [2024-07-24 17:24:36.614137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.569 [2024-07-24 17:24:36.617667] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.569 [2024-07-24 17:24:36.617704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.569 [2024-07-24 17:24:36.617734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.569 [2024-07-24 17:24:36.621932] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.569 [2024-07-24 17:24:36.621970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.569 [2024-07-24 17:24:36.622000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.569 [2024-07-24 17:24:36.624881] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.569 [2024-07-24 17:24:36.624930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.569 [2024-07-24 17:24:36.624960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.569 [2024-07-24 17:24:36.628960] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.569 [2024-07-24 17:24:36.628998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.569 [2024-07-24 17:24:36.629027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.569 [2024-07-24 17:24:36.633181] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.569 [2024-07-24 17:24:36.633219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.569 [2024-07-24 17:24:36.633248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.569 [2024-07-24 17:24:36.636563] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.569 [2024-07-24 17:24:36.636600] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.569 [2024-07-24 17:24:36.636630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.569 [2024-07-24 17:24:36.640660] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.569 [2024-07-24 17:24:36.640698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.569 [2024-07-24 17:24:36.640727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.569 [2024-07-24 17:24:36.644958] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.569 [2024-07-24 17:24:36.644997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.569 [2024-07-24 17:24:36.645027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.569 [2024-07-24 17:24:36.648560] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.569 [2024-07-24 17:24:36.648598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.569 [2024-07-24 17:24:36.648627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.569 [2024-07-24 17:24:36.652419] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.569 [2024-07-24 17:24:36.652458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.569 [2024-07-24 17:24:36.652487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.569 [2024-07-24 17:24:36.656017] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.569 [2024-07-24 17:24:36.656054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.569 [2024-07-24 17:24:36.656084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.569 [2024-07-24 17:24:36.660599] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.569 [2024-07-24 17:24:36.660637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.569 [2024-07-24 17:24:36.660666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.569 [2024-07-24 17:24:36.663874] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.569 
[2024-07-24 17:24:36.663912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.569 [2024-07-24 17:24:36.663941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.569 [2024-07-24 17:24:36.668274] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.569 [2024-07-24 17:24:36.668312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.569 [2024-07-24 17:24:36.668342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.569 [2024-07-24 17:24:36.671614] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.569 [2024-07-24 17:24:36.671651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.569 [2024-07-24 17:24:36.671680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.569 [2024-07-24 17:24:36.675638] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.569 [2024-07-24 17:24:36.675674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.569 [2024-07-24 17:24:36.675703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.569 [2024-07-24 17:24:36.679124] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.569 [2024-07-24 17:24:36.679162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.569 [2024-07-24 17:24:36.679176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.569 [2024-07-24 17:24:36.683042] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.569 [2024-07-24 17:24:36.683131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.569 [2024-07-24 17:24:36.683147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.569 [2024-07-24 17:24:36.687593] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.569 [2024-07-24 17:24:36.687631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.569 [2024-07-24 17:24:36.687661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.569 [2024-07-24 17:24:36.690546] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xff1fd0) 00:23:50.569 [2024-07-24 17:24:36.690582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.569 [2024-07-24 17:24:36.690612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.570 [2024-07-24 17:24:36.695389] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.570 [2024-07-24 17:24:36.695457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.570 [2024-07-24 17:24:36.695501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.570 [2024-07-24 17:24:36.699562] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.570 [2024-07-24 17:24:36.699600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.570 [2024-07-24 17:24:36.699629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.570 [2024-07-24 17:24:36.702653] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.570 [2024-07-24 17:24:36.702689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.570 [2024-07-24 17:24:36.702719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.570 [2024-07-24 17:24:36.706938] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.570 [2024-07-24 17:24:36.706976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.570 [2024-07-24 17:24:36.707021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.570 [2024-07-24 17:24:36.712022] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.570 [2024-07-24 17:24:36.712061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.570 [2024-07-24 17:24:36.712091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.570 [2024-07-24 17:24:36.716240] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.570 [2024-07-24 17:24:36.716278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.570 [2024-07-24 17:24:36.716307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.570 [2024-07-24 17:24:36.719216] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.570 [2024-07-24 17:24:36.719255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.570 [2024-07-24 17:24:36.719286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.570 [2024-07-24 17:24:36.724152] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.570 [2024-07-24 17:24:36.724222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.570 [2024-07-24 17:24:36.724251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.570 [2024-07-24 17:24:36.729147] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.570 [2024-07-24 17:24:36.729216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.570 [2024-07-24 17:24:36.729245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.570 [2024-07-24 17:24:36.732427] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.570 [2024-07-24 17:24:36.732462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.570 [2024-07-24 17:24:36.732491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.570 [2024-07-24 17:24:36.736300] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.570 [2024-07-24 17:24:36.736339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.570 [2024-07-24 17:24:36.736368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.570 [2024-07-24 17:24:36.740429] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.570 [2024-07-24 17:24:36.740467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.570 [2024-07-24 17:24:36.740511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.570 [2024-07-24 17:24:36.744787] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.570 [2024-07-24 17:24:36.744824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.570 [2024-07-24 17:24:36.744853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:23:50.570 [2024-07-24 17:24:36.747531] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.570 [2024-07-24 17:24:36.747567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.570 [2024-07-24 17:24:36.747597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.570 [2024-07-24 17:24:36.752063] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.570 [2024-07-24 17:24:36.752124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.570 [2024-07-24 17:24:36.752154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.570 [2024-07-24 17:24:36.756790] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.570 [2024-07-24 17:24:36.756828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.570 [2024-07-24 17:24:36.756857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.570 [2024-07-24 17:24:36.760669] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.570 [2024-07-24 17:24:36.760705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.570 [2024-07-24 17:24:36.760735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.570 [2024-07-24 17:24:36.763995] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.570 [2024-07-24 17:24:36.764032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.570 [2024-07-24 17:24:36.764061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.570 [2024-07-24 17:24:36.768256] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.570 [2024-07-24 17:24:36.768294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.570 [2024-07-24 17:24:36.768324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.570 [2024-07-24 17:24:36.772174] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.570 [2024-07-24 17:24:36.772211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.570 [2024-07-24 17:24:36.772241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.570 [2024-07-24 17:24:36.775450] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.570 [2024-07-24 17:24:36.775486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.570 [2024-07-24 17:24:36.775515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.570 [2024-07-24 17:24:36.779323] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.570 [2024-07-24 17:24:36.779361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.570 [2024-07-24 17:24:36.779406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.570 [2024-07-24 17:24:36.782896] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.570 [2024-07-24 17:24:36.782932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.570 [2024-07-24 17:24:36.782962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.570 [2024-07-24 17:24:36.787180] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.570 [2024-07-24 17:24:36.787219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.570 [2024-07-24 17:24:36.787233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.570 [2024-07-24 17:24:36.790249] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.570 [2024-07-24 17:24:36.790285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.570 [2024-07-24 17:24:36.790314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.570 [2024-07-24 17:24:36.794141] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.570 [2024-07-24 17:24:36.794187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.570 [2024-07-24 17:24:36.794215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.570 [2024-07-24 17:24:36.799187] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.570 [2024-07-24 17:24:36.799228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.570 [2024-07-24 17:24:36.799242] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.831 [2024-07-24 17:24:36.802738] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.831 [2024-07-24 17:24:36.802775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.831 [2024-07-24 17:24:36.802804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.831 [2024-07-24 17:24:36.806483] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.831 [2024-07-24 17:24:36.806522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.831 [2024-07-24 17:24:36.806551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.831 [2024-07-24 17:24:36.811227] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.831 [2024-07-24 17:24:36.811268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.831 [2024-07-24 17:24:36.811283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.831 [2024-07-24 17:24:36.815615] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.831 [2024-07-24 17:24:36.815651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.831 [2024-07-24 17:24:36.815680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.831 [2024-07-24 17:24:36.820332] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.831 [2024-07-24 17:24:36.820384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.831 [2024-07-24 17:24:36.820414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.831 [2024-07-24 17:24:36.824271] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.831 [2024-07-24 17:24:36.824308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.831 [2024-07-24 17:24:36.824353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.831 [2024-07-24 17:24:36.827999] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.831 [2024-07-24 17:24:36.828036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.831 [2024-07-24 17:24:36.828066] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.831 [2024-07-24 17:24:36.831254] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.831 [2024-07-24 17:24:36.831294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.831 [2024-07-24 17:24:36.831308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.831 [2024-07-24 17:24:36.834692] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.831 [2024-07-24 17:24:36.834727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.831 [2024-07-24 17:24:36.834756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.831 [2024-07-24 17:24:36.839712] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.831 [2024-07-24 17:24:36.839751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.831 [2024-07-24 17:24:36.839781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.831 [2024-07-24 17:24:36.844946] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.831 [2024-07-24 17:24:36.844986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.831 [2024-07-24 17:24:36.845015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.831 [2024-07-24 17:24:36.847813] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.831 [2024-07-24 17:24:36.847849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.831 [2024-07-24 17:24:36.847877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.831 [2024-07-24 17:24:36.852631] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.831 [2024-07-24 17:24:36.852667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.831 [2024-07-24 17:24:36.852695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.831 [2024-07-24 17:24:36.855990] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.831 [2024-07-24 17:24:36.856027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:50.831 [2024-07-24 17:24:36.856056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.831 [2024-07-24 17:24:36.860275] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.831 [2024-07-24 17:24:36.860314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.831 [2024-07-24 17:24:36.860328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.831 [2024-07-24 17:24:36.865598] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.831 [2024-07-24 17:24:36.865649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.831 [2024-07-24 17:24:36.865677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.831 [2024-07-24 17:24:36.870627] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.831 [2024-07-24 17:24:36.870665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.831 [2024-07-24 17:24:36.870694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.831 [2024-07-24 17:24:36.873990] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.831 [2024-07-24 17:24:36.874027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.831 [2024-07-24 17:24:36.874056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.831 [2024-07-24 17:24:36.878267] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.831 [2024-07-24 17:24:36.878304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.831 [2024-07-24 17:24:36.878333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.831 [2024-07-24 17:24:36.882473] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.831 [2024-07-24 17:24:36.882509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.831 [2024-07-24 17:24:36.882539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.832 [2024-07-24 17:24:36.885362] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.832 [2024-07-24 17:24:36.885398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14016 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.832 [2024-07-24 17:24:36.885427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.832 [2024-07-24 17:24:36.889703] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.832 [2024-07-24 17:24:36.889740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.832 [2024-07-24 17:24:36.889769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.832 [2024-07-24 17:24:36.893546] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.832 [2024-07-24 17:24:36.893582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.832 [2024-07-24 17:24:36.893611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.832 [2024-07-24 17:24:36.897378] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.832 [2024-07-24 17:24:36.897416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.832 [2024-07-24 17:24:36.897460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.832 [2024-07-24 17:24:36.901800] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.832 [2024-07-24 17:24:36.901838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.832 [2024-07-24 17:24:36.901868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.832 [2024-07-24 17:24:36.906489] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.832 [2024-07-24 17:24:36.906527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.832 [2024-07-24 17:24:36.906556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.832 [2024-07-24 17:24:36.910309] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.832 [2024-07-24 17:24:36.910347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.832 [2024-07-24 17:24:36.910377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.832 [2024-07-24 17:24:36.914353] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.832 [2024-07-24 17:24:36.914392] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.832 [2024-07-24 17:24:36.914422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.832 [2024-07-24 17:24:36.918571] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.832 [2024-07-24 17:24:36.918610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.832 [2024-07-24 17:24:36.918640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.832 [2024-07-24 17:24:36.923278] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.832 [2024-07-24 17:24:36.923318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.832 [2024-07-24 17:24:36.923332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.832 [2024-07-24 17:24:36.926948] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.832 [2024-07-24 17:24:36.926985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.832 [2024-07-24 17:24:36.927015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.832 [2024-07-24 17:24:36.931650] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.832 [2024-07-24 17:24:36.931688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.832 [2024-07-24 17:24:36.931718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.832 [2024-07-24 17:24:36.936413] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.832 [2024-07-24 17:24:36.936452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.832 [2024-07-24 17:24:36.936483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.832 [2024-07-24 17:24:36.940238] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.832 [2024-07-24 17:24:36.940287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.832 [2024-07-24 17:24:36.940316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.832 [2024-07-24 17:24:36.943900] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.832 [2024-07-24 17:24:36.943937] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.832 [2024-07-24 17:24:36.943965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.832 [2024-07-24 17:24:36.948617] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.832 [2024-07-24 17:24:36.948791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.832 [2024-07-24 17:24:36.948824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.832 [2024-07-24 17:24:36.952955] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.832 [2024-07-24 17:24:36.952993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.832 [2024-07-24 17:24:36.953023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.832 [2024-07-24 17:24:36.956472] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.832 [2024-07-24 17:24:36.956509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.832 [2024-07-24 17:24:36.956538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.832 [2024-07-24 17:24:36.960095] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.832 [2024-07-24 17:24:36.960155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.832 [2024-07-24 17:24:36.960170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.832 [2024-07-24 17:24:36.964366] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.832 [2024-07-24 17:24:36.964417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.832 [2024-07-24 17:24:36.964448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.832 [2024-07-24 17:24:36.967525] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.832 [2024-07-24 17:24:36.967578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.832 [2024-07-24 17:24:36.967607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.832 [2024-07-24 17:24:36.971775] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 
00:23:50.832 [2024-07-24 17:24:36.971813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.832 [2024-07-24 17:24:36.971842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.832 [2024-07-24 17:24:36.975758] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.832 [2024-07-24 17:24:36.975796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.832 [2024-07-24 17:24:36.975825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.832 [2024-07-24 17:24:36.979206] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.832 [2024-07-24 17:24:36.979245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.832 [2024-07-24 17:24:36.979259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.832 [2024-07-24 17:24:36.983348] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.832 [2024-07-24 17:24:36.983416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.832 [2024-07-24 17:24:36.983446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.832 [2024-07-24 17:24:36.987947] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.832 [2024-07-24 17:24:36.988000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.832 [2024-07-24 17:24:36.988030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.832 [2024-07-24 17:24:36.991215] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.832 [2024-07-24 17:24:36.991368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.833 [2024-07-24 17:24:36.991386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.833 [2024-07-24 17:24:36.995508] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.833 [2024-07-24 17:24:36.995715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.833 [2024-07-24 17:24:36.995859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.833 [2024-07-24 17:24:37.000329] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.833 [2024-07-24 17:24:37.000540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.833 [2024-07-24 17:24:37.000681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.833 [2024-07-24 17:24:37.005352] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.833 [2024-07-24 17:24:37.005546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.833 [2024-07-24 17:24:37.005690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.833 [2024-07-24 17:24:37.009192] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.833 [2024-07-24 17:24:37.009231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.833 [2024-07-24 17:24:37.009261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.833 [2024-07-24 17:24:37.013463] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.833 [2024-07-24 17:24:37.013501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.833 [2024-07-24 17:24:37.013530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.833 [2024-07-24 17:24:37.018567] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.833 [2024-07-24 17:24:37.018606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.833 [2024-07-24 17:24:37.018635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.833 [2024-07-24 17:24:37.022060] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.833 [2024-07-24 17:24:37.022113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.833 [2024-07-24 17:24:37.022128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.833 [2024-07-24 17:24:37.026228] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.833 [2024-07-24 17:24:37.026266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.833 [2024-07-24 17:24:37.026295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.833 [2024-07-24 17:24:37.030335] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.833 [2024-07-24 17:24:37.030373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.833 [2024-07-24 17:24:37.030402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.833 [2024-07-24 17:24:37.034394] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.833 [2024-07-24 17:24:37.034433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.833 [2024-07-24 17:24:37.034463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.833 [2024-07-24 17:24:37.038360] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.833 [2024-07-24 17:24:37.038400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.833 [2024-07-24 17:24:37.038430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:50.833 [2024-07-24 17:24:37.042532] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.833 [2024-07-24 17:24:37.042570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.833 [2024-07-24 17:24:37.042599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:50.833 [2024-07-24 17:24:37.045999] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.833 [2024-07-24 17:24:37.046036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.833 [2024-07-24 17:24:37.046066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:50.833 [2024-07-24 17:24:37.049890] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.833 [2024-07-24 17:24:37.049928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.833 [2024-07-24 17:24:37.049957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:50.833 [2024-07-24 17:24:37.055215] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.833 [2024-07-24 17:24:37.055256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.833 [2024-07-24 17:24:37.055271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:23:50.833 [2024-07-24 17:24:37.060455] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:50.833 [2024-07-24 17:24:37.060499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:50.833 [2024-07-24 17:24:37.060529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.093 [2024-07-24 17:24:37.063931] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.093 [2024-07-24 17:24:37.063970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.093 [2024-07-24 17:24:37.064001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.093 [2024-07-24 17:24:37.068331] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.093 [2024-07-24 17:24:37.068386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.093 [2024-07-24 17:24:37.068416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.093 [2024-07-24 17:24:37.072125] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.093 [2024-07-24 17:24:37.072189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.093 [2024-07-24 17:24:37.072204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.093 [2024-07-24 17:24:37.076424] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.093 [2024-07-24 17:24:37.076461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.093 [2024-07-24 17:24:37.076489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.093 [2024-07-24 17:24:37.080934] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.093 [2024-07-24 17:24:37.080971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.093 [2024-07-24 17:24:37.081017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.093 [2024-07-24 17:24:37.084698] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.093 [2024-07-24 17:24:37.084735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.093 [2024-07-24 17:24:37.084764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.093 [2024-07-24 17:24:37.088864] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.093 [2024-07-24 17:24:37.088901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.093 [2024-07-24 17:24:37.088929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.093 [2024-07-24 17:24:37.093135] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.093 [2024-07-24 17:24:37.093199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.093 [2024-07-24 17:24:37.093214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.093 [2024-07-24 17:24:37.098279] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.093 [2024-07-24 17:24:37.098329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.093 [2024-07-24 17:24:37.098344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.093 [2024-07-24 17:24:37.101383] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.093 [2024-07-24 17:24:37.101466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.093 [2024-07-24 17:24:37.101497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.093 [2024-07-24 17:24:37.105993] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.094 [2024-07-24 17:24:37.106030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.094 [2024-07-24 17:24:37.106060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.094 [2024-07-24 17:24:37.109549] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.094 [2024-07-24 17:24:37.109585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.094 [2024-07-24 17:24:37.109614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.094 [2024-07-24 17:24:37.113059] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.094 [2024-07-24 17:24:37.113121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.094 [2024-07-24 17:24:37.113136] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.094 [2024-07-24 17:24:37.116764] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.094 [2024-07-24 17:24:37.116803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.094 [2024-07-24 17:24:37.116834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.094 [2024-07-24 17:24:37.120870] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.094 [2024-07-24 17:24:37.120907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.094 [2024-07-24 17:24:37.120953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.094 [2024-07-24 17:24:37.125279] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.094 [2024-07-24 17:24:37.125326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.094 [2024-07-24 17:24:37.125357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.094 [2024-07-24 17:24:37.129693] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.094 [2024-07-24 17:24:37.129730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.094 [2024-07-24 17:24:37.129759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.094 [2024-07-24 17:24:37.134724] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.094 [2024-07-24 17:24:37.134761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.094 [2024-07-24 17:24:37.134790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.094 [2024-07-24 17:24:37.139612] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.094 [2024-07-24 17:24:37.139649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.094 [2024-07-24 17:24:37.139678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.094 [2024-07-24 17:24:37.142305] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.094 [2024-07-24 17:24:37.142339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.094 [2024-07-24 17:24:37.142367] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.094 [2024-07-24 17:24:37.146965] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.094 [2024-07-24 17:24:37.147001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.094 [2024-07-24 17:24:37.147030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.094 [2024-07-24 17:24:37.152041] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.094 [2024-07-24 17:24:37.152078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.094 [2024-07-24 17:24:37.152107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.094 [2024-07-24 17:24:37.156806] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.094 [2024-07-24 17:24:37.156843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.094 [2024-07-24 17:24:37.156872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.094 [2024-07-24 17:24:37.159925] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.094 [2024-07-24 17:24:37.159960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.094 [2024-07-24 17:24:37.159989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.094 [2024-07-24 17:24:37.164060] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.094 [2024-07-24 17:24:37.164122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.094 [2024-07-24 17:24:37.164136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.094 [2024-07-24 17:24:37.168372] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.094 [2024-07-24 17:24:37.168409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.094 [2024-07-24 17:24:37.168439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.094 [2024-07-24 17:24:37.171579] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.094 [2024-07-24 17:24:37.171614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.094 
[2024-07-24 17:24:37.171642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.094 [2024-07-24 17:24:37.175669] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.094 [2024-07-24 17:24:37.175705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.094 [2024-07-24 17:24:37.175733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.094 [2024-07-24 17:24:37.179986] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.094 [2024-07-24 17:24:37.180024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.094 [2024-07-24 17:24:37.180053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.094 [2024-07-24 17:24:37.184097] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.094 [2024-07-24 17:24:37.184156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.094 [2024-07-24 17:24:37.184170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.094 [2024-07-24 17:24:37.187537] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.094 [2024-07-24 17:24:37.187573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.094 [2024-07-24 17:24:37.187601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.094 [2024-07-24 17:24:37.191706] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.094 [2024-07-24 17:24:37.191757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.094 [2024-07-24 17:24:37.191787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.094 [2024-07-24 17:24:37.195315] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.094 [2024-07-24 17:24:37.195356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.094 [2024-07-24 17:24:37.195370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.094 [2024-07-24 17:24:37.199366] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.095 [2024-07-24 17:24:37.199433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18368 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.095 [2024-07-24 17:24:37.199462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.095 [2024-07-24 17:24:37.202746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.095 [2024-07-24 17:24:37.202782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.095 [2024-07-24 17:24:37.202811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.095 [2024-07-24 17:24:37.207012] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.095 [2024-07-24 17:24:37.207070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.095 [2024-07-24 17:24:37.207101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.095 [2024-07-24 17:24:37.211748] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.095 [2024-07-24 17:24:37.211786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.095 [2024-07-24 17:24:37.211815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.095 [2024-07-24 17:24:37.214976] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.095 [2024-07-24 17:24:37.215011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.095 [2024-07-24 17:24:37.215039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.095 [2024-07-24 17:24:37.218711] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.095 [2024-07-24 17:24:37.218748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.095 [2024-07-24 17:24:37.218777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.095 [2024-07-24 17:24:37.223928] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.095 [2024-07-24 17:24:37.223965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.095 [2024-07-24 17:24:37.223994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.095 [2024-07-24 17:24:37.228559] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.095 [2024-07-24 17:24:37.228596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:12 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.095 [2024-07-24 17:24:37.228625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.095 [2024-07-24 17:24:37.231344] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.095 [2024-07-24 17:24:37.231396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.095 [2024-07-24 17:24:37.231425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.095 [2024-07-24 17:24:37.235965] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.095 [2024-07-24 17:24:37.236003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.095 [2024-07-24 17:24:37.236033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.095 [2024-07-24 17:24:37.239482] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.095 [2024-07-24 17:24:37.239532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.095 [2024-07-24 17:24:37.239562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.095 [2024-07-24 17:24:37.243901] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.095 [2024-07-24 17:24:37.243938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.095 [2024-07-24 17:24:37.243967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.095 [2024-07-24 17:24:37.248401] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.095 [2024-07-24 17:24:37.248438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.095 [2024-07-24 17:24:37.248468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.095 [2024-07-24 17:24:37.251874] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.095 [2024-07-24 17:24:37.251910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.095 [2024-07-24 17:24:37.251939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.095 [2024-07-24 17:24:37.256029] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.095 [2024-07-24 17:24:37.256212] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.095 [2024-07-24 17:24:37.256247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.095 [2024-07-24 17:24:37.260029] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.095 [2024-07-24 17:24:37.260231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.095 [2024-07-24 17:24:37.260415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.095 [2024-07-24 17:24:37.264456] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.095 [2024-07-24 17:24:37.264648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.095 [2024-07-24 17:24:37.264768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.095 [2024-07-24 17:24:37.268350] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.095 [2024-07-24 17:24:37.268387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.095 [2024-07-24 17:24:37.268417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.095 [2024-07-24 17:24:37.272158] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.095 [2024-07-24 17:24:37.272205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.095 [2024-07-24 17:24:37.272234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.095 [2024-07-24 17:24:37.275926] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.095 [2024-07-24 17:24:37.275963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.095 [2024-07-24 17:24:37.275992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.095 [2024-07-24 17:24:37.279689] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.095 [2024-07-24 17:24:37.279726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.095 [2024-07-24 17:24:37.279755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.095 [2024-07-24 17:24:37.283837] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.095 
[2024-07-24 17:24:37.283875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.095 [2024-07-24 17:24:37.283904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.095 [2024-07-24 17:24:37.287541] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.095 [2024-07-24 17:24:37.287577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.095 [2024-07-24 17:24:37.287606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.095 [2024-07-24 17:24:37.291352] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.095 [2024-07-24 17:24:37.291421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.095 [2024-07-24 17:24:37.291451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.095 [2024-07-24 17:24:37.295491] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.095 [2024-07-24 17:24:37.295528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.095 [2024-07-24 17:24:37.295558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.095 [2024-07-24 17:24:37.299448] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.095 [2024-07-24 17:24:37.299487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.095 [2024-07-24 17:24:37.299531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.095 [2024-07-24 17:24:37.302740] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.095 [2024-07-24 17:24:37.302777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.095 [2024-07-24 17:24:37.302806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.096 [2024-07-24 17:24:37.306879] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.096 [2024-07-24 17:24:37.306916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.096 [2024-07-24 17:24:37.306945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.096 [2024-07-24 17:24:37.310935] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xff1fd0) 00:23:51.096 [2024-07-24 17:24:37.310971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.096 [2024-07-24 17:24:37.311000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.096 [2024-07-24 17:24:37.314050] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.096 [2024-07-24 17:24:37.314087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.096 [2024-07-24 17:24:37.314141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.096 [2024-07-24 17:24:37.318856] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.096 [2024-07-24 17:24:37.318894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.096 [2024-07-24 17:24:37.318923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.096 [2024-07-24 17:24:37.323587] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.096 [2024-07-24 17:24:37.323623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.096 [2024-07-24 17:24:37.323653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.357 [2024-07-24 17:24:37.329057] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.357 [2024-07-24 17:24:37.329118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.357 [2024-07-24 17:24:37.329133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.357 [2024-07-24 17:24:37.332154] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.357 [2024-07-24 17:24:37.332230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.357 [2024-07-24 17:24:37.332261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.357 [2024-07-24 17:24:37.336947] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.357 [2024-07-24 17:24:37.337017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.357 [2024-07-24 17:24:37.337047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.357 [2024-07-24 17:24:37.340444] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.357 [2024-07-24 17:24:37.340481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.357 [2024-07-24 17:24:37.340510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.357 [2024-07-24 17:24:37.344280] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.357 [2024-07-24 17:24:37.344316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.357 [2024-07-24 17:24:37.344345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.357 [2024-07-24 17:24:37.348555] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.357 [2024-07-24 17:24:37.348593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.357 [2024-07-24 17:24:37.348622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.357 [2024-07-24 17:24:37.352214] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.357 [2024-07-24 17:24:37.352249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.357 [2024-07-24 17:24:37.352278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.357 [2024-07-24 17:24:37.355957] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.357 [2024-07-24 17:24:37.355994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.357 [2024-07-24 17:24:37.356025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.357 [2024-07-24 17:24:37.360734] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.357 [2024-07-24 17:24:37.360772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.357 [2024-07-24 17:24:37.360803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.357 [2024-07-24 17:24:37.365343] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.357 [2024-07-24 17:24:37.365382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.357 [2024-07-24 17:24:37.365411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
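(The repeated "data digest error" records above come from the NVMe/TCP data-digest check: the transport appends a CRC-32C digest (DDGST) to each C2H data PDU and the host recomputes it over the received payload, reporting a digest error on mismatch. The sketch below is a minimal, generic illustration of that comparison in C, assuming a hypothetical zeroed payload buffer and an injected, deliberately wrong received_ddgst value; it is not SPDK's accel-sequence implementation referenced in the log.)

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* CRC-32C (Castagnoli), bitwise reflected form: init 0xFFFFFFFF,
 * polynomial 0x82F63B78, final XOR 0xFFFFFFFF. NVMe/TCP uses this
 * algorithm for its header (HDGST) and data (DDGST) digests. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;

    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int bit = 0; bit < 8; bit++) {
            /* Shift one bit; XOR in the polynomial only when the LSB was set. */
            crc = (crc >> 1) ^ (0x82F63B78u & (0u - (crc & 1u)));
        }
    }
    return crc ^ 0xFFFFFFFFu;
}

int main(void)
{
    /* Hypothetical 32-byte C2H payload and a deliberately wrong DDGST,
     * standing in for what a digest-error-injection test produces. */
    uint8_t payload[32] = {0};
    uint32_t received_ddgst = 0xDEADBEEFu; /* assumed/injected value */
    uint32_t computed = crc32c(payload, sizeof(payload));

    if (computed != received_ddgst) {
        /* A host-side driver reports a data digest error at this point,
         * which is what the *ERROR* records in this log show. */
        printf("data digest mismatch: computed=0x%08x received=0x%08x\n",
               computed, received_ddgst);
    }
    return 0;
}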
00:23:51.357 [2024-07-24 17:24:37.369112] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.357 [2024-07-24 17:24:37.369159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.357 [2024-07-24 17:24:37.369189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.357 [2024-07-24 17:24:37.373389] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.357 [2024-07-24 17:24:37.373424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.357 [2024-07-24 17:24:37.373454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.357 [2024-07-24 17:24:37.377309] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.357 [2024-07-24 17:24:37.377363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.357 [2024-07-24 17:24:37.377393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.357 [2024-07-24 17:24:37.382052] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.357 [2024-07-24 17:24:37.382132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.357 [2024-07-24 17:24:37.382148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.357 [2024-07-24 17:24:37.386556] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.357 [2024-07-24 17:24:37.386593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.357 [2024-07-24 17:24:37.386622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.357 [2024-07-24 17:24:37.390839] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.357 [2024-07-24 17:24:37.390893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.357 [2024-07-24 17:24:37.390923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.357 [2024-07-24 17:24:37.395156] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.357 [2024-07-24 17:24:37.395196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.357 [2024-07-24 17:24:37.395210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.357 [2024-07-24 17:24:37.399528] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.357 [2024-07-24 17:24:37.399566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.357 [2024-07-24 17:24:37.399595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.357 [2024-07-24 17:24:37.403475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.357 [2024-07-24 17:24:37.403513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.357 [2024-07-24 17:24:37.403543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.357 [2024-07-24 17:24:37.407690] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.357 [2024-07-24 17:24:37.407729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.357 [2024-07-24 17:24:37.407758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.357 [2024-07-24 17:24:37.411934] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.357 [2024-07-24 17:24:37.411974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.357 [2024-07-24 17:24:37.412005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.357 [2024-07-24 17:24:37.416058] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.357 [2024-07-24 17:24:37.416094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.357 [2024-07-24 17:24:37.416153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.357 [2024-07-24 17:24:37.420598] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.357 [2024-07-24 17:24:37.420635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.357 [2024-07-24 17:24:37.420664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.357 [2024-07-24 17:24:37.423823] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.358 [2024-07-24 17:24:37.423860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.358 [2024-07-24 17:24:37.423889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.358 [2024-07-24 17:24:37.427948] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.358 [2024-07-24 17:24:37.427985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.358 [2024-07-24 17:24:37.428014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.358 [2024-07-24 17:24:37.432339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.358 [2024-07-24 17:24:37.432377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.358 [2024-07-24 17:24:37.432406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.358 [2024-07-24 17:24:37.435769] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.358 [2024-07-24 17:24:37.435817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.358 [2024-07-24 17:24:37.435831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.358 [2024-07-24 17:24:37.440222] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.358 [2024-07-24 17:24:37.440258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.358 [2024-07-24 17:24:37.440288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.358 [2024-07-24 17:24:37.444173] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.358 [2024-07-24 17:24:37.444209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.358 [2024-07-24 17:24:37.444238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.358 [2024-07-24 17:24:37.448102] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.358 [2024-07-24 17:24:37.448162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.358 [2024-07-24 17:24:37.448193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.358 [2024-07-24 17:24:37.451898] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.358 [2024-07-24 17:24:37.451935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.358 [2024-07-24 17:24:37.451964] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.358 [2024-07-24 17:24:37.455887] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.358 [2024-07-24 17:24:37.455924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.358 [2024-07-24 17:24:37.455952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.358 [2024-07-24 17:24:37.460016] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.358 [2024-07-24 17:24:37.460053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.358 [2024-07-24 17:24:37.460082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.358 [2024-07-24 17:24:37.463606] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.358 [2024-07-24 17:24:37.463642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.358 [2024-07-24 17:24:37.463672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.358 [2024-07-24 17:24:37.467452] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.358 [2024-07-24 17:24:37.467488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.358 [2024-07-24 17:24:37.467517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.358 [2024-07-24 17:24:37.470813] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.358 [2024-07-24 17:24:37.470849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.358 [2024-07-24 17:24:37.470879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.358 [2024-07-24 17:24:37.475024] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.358 [2024-07-24 17:24:37.475086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.358 [2024-07-24 17:24:37.475111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.358 [2024-07-24 17:24:37.479492] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.358 [2024-07-24 17:24:37.479543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.358 
[2024-07-24 17:24:37.479572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.358 [2024-07-24 17:24:37.483221] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.358 [2024-07-24 17:24:37.483261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.358 [2024-07-24 17:24:37.483275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.358 [2024-07-24 17:24:37.486948] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.358 [2024-07-24 17:24:37.486984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.358 [2024-07-24 17:24:37.487014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.358 [2024-07-24 17:24:37.491141] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.358 [2024-07-24 17:24:37.491180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.358 [2024-07-24 17:24:37.491195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.358 [2024-07-24 17:24:37.495341] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.358 [2024-07-24 17:24:37.495411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.358 [2024-07-24 17:24:37.495443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.358 [2024-07-24 17:24:37.498948] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.358 [2024-07-24 17:24:37.498983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.358 [2024-07-24 17:24:37.499012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.358 [2024-07-24 17:24:37.503249] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.358 [2024-07-24 17:24:37.503288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.358 [2024-07-24 17:24:37.503303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.358 [2024-07-24 17:24:37.507426] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.358 [2024-07-24 17:24:37.507462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:23:51.358 [2024-07-24 17:24:37.507491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.358 [2024-07-24 17:24:37.510283] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.358 [2024-07-24 17:24:37.510319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.358 [2024-07-24 17:24:37.510349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.358 [2024-07-24 17:24:37.514021] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.358 [2024-07-24 17:24:37.514059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.358 [2024-07-24 17:24:37.514088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.358 [2024-07-24 17:24:37.517958] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.358 [2024-07-24 17:24:37.518011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.358 [2024-07-24 17:24:37.518040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.358 [2024-07-24 17:24:37.521659] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.358 [2024-07-24 17:24:37.521699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.358 [2024-07-24 17:24:37.521729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.359 [2024-07-24 17:24:37.525742] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.359 [2024-07-24 17:24:37.525789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.359 [2024-07-24 17:24:37.525818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.359 [2024-07-24 17:24:37.529208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.359 [2024-07-24 17:24:37.529244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.359 [2024-07-24 17:24:37.529273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.359 [2024-07-24 17:24:37.532911] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.359 [2024-07-24 17:24:37.532948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 
lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.359 [2024-07-24 17:24:37.532978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.359 [2024-07-24 17:24:37.536489] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.359 [2024-07-24 17:24:37.536540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.359 [2024-07-24 17:24:37.536568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.359 [2024-07-24 17:24:37.540231] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.359 [2024-07-24 17:24:37.540269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.359 [2024-07-24 17:24:37.540298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.359 [2024-07-24 17:24:37.544467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.359 [2024-07-24 17:24:37.544518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.359 [2024-07-24 17:24:37.544548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.359 [2024-07-24 17:24:37.548036] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.359 [2024-07-24 17:24:37.548072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.359 [2024-07-24 17:24:37.548101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.359 [2024-07-24 17:24:37.552679] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.359 [2024-07-24 17:24:37.552717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.359 [2024-07-24 17:24:37.552747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.359 [2024-07-24 17:24:37.556719] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.359 [2024-07-24 17:24:37.556757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.359 [2024-07-24 17:24:37.556786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.359 [2024-07-24 17:24:37.559887] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.359 [2024-07-24 17:24:37.560081] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.359 [2024-07-24 17:24:37.560211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.359 [2024-07-24 17:24:37.564655] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.359 [2024-07-24 17:24:37.564847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.359 [2024-07-24 17:24:37.564988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.359 [2024-07-24 17:24:37.569581] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.359 [2024-07-24 17:24:37.569754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.359 [2024-07-24 17:24:37.569787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.359 [2024-07-24 17:24:37.573365] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.359 [2024-07-24 17:24:37.573538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.359 [2024-07-24 17:24:37.573717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.359 [2024-07-24 17:24:37.578108] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.359 [2024-07-24 17:24:37.578294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.359 [2024-07-24 17:24:37.578461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.359 [2024-07-24 17:24:37.582897] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.359 [2024-07-24 17:24:37.583153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.359 [2024-07-24 17:24:37.583296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.359 [2024-07-24 17:24:37.588556] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.359 [2024-07-24 17:24:37.588609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.359 [2024-07-24 17:24:37.588622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.619 [2024-07-24 17:24:37.591768] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.619 
[2024-07-24 17:24:37.591802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.619 [2024-07-24 17:24:37.591831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.619 [2024-07-24 17:24:37.595795] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.619 [2024-07-24 17:24:37.595832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.619 [2024-07-24 17:24:37.595861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.619 [2024-07-24 17:24:37.600082] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.619 [2024-07-24 17:24:37.600139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.619 [2024-07-24 17:24:37.600153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.619 [2024-07-24 17:24:37.604904] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.619 [2024-07-24 17:24:37.604940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.619 [2024-07-24 17:24:37.604969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.619 [2024-07-24 17:24:37.609590] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.619 [2024-07-24 17:24:37.609626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.619 [2024-07-24 17:24:37.609654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.619 [2024-07-24 17:24:37.612310] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.620 [2024-07-24 17:24:37.612343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.620 [2024-07-24 17:24:37.612372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.620 [2024-07-24 17:24:37.617241] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.620 [2024-07-24 17:24:37.617279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.620 [2024-07-24 17:24:37.617323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.620 [2024-07-24 17:24:37.621135] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xff1fd0) 00:23:51.620 [2024-07-24 17:24:37.621212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.620 [2024-07-24 17:24:37.621228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.620 [2024-07-24 17:24:37.625468] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.620 [2024-07-24 17:24:37.625506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.620 [2024-07-24 17:24:37.625549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.620 [2024-07-24 17:24:37.629896] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.620 [2024-07-24 17:24:37.629934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.620 [2024-07-24 17:24:37.629962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.620 [2024-07-24 17:24:37.634590] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.620 [2024-07-24 17:24:37.634627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.620 [2024-07-24 17:24:37.634656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.620 [2024-07-24 17:24:37.638093] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.620 [2024-07-24 17:24:37.638153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.620 [2024-07-24 17:24:37.638168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.620 [2024-07-24 17:24:37.642488] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.620 [2024-07-24 17:24:37.642540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.620 [2024-07-24 17:24:37.642569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.620 [2024-07-24 17:24:37.647038] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.620 [2024-07-24 17:24:37.647112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.620 [2024-07-24 17:24:37.647128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.620 [2024-07-24 17:24:37.650671] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.620 [2024-07-24 17:24:37.650706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.620 [2024-07-24 17:24:37.650735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.620 [2024-07-24 17:24:37.654508] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.620 [2024-07-24 17:24:37.654546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.620 [2024-07-24 17:24:37.654575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.620 [2024-07-24 17:24:37.658839] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.620 [2024-07-24 17:24:37.658876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.620 [2024-07-24 17:24:37.658905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.620 [2024-07-24 17:24:37.662849] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.620 [2024-07-24 17:24:37.663036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.620 [2024-07-24 17:24:37.663094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.620 [2024-07-24 17:24:37.666786] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.620 [2024-07-24 17:24:37.666991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.620 [2024-07-24 17:24:37.667245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.620 [2024-07-24 17:24:37.671316] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.620 [2024-07-24 17:24:37.671547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.620 [2024-07-24 17:24:37.671699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.620 [2024-07-24 17:24:37.675100] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.620 [2024-07-24 17:24:37.675265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.620 [2024-07-24 17:24:37.675284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
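The long run of repeated records above is the expected signature of this digest-error pass: each READ's payload fails its data digest check ("data digest error on tqpair=(0xff1fd0)") and the command is then completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22) instead of success. For reference, the NVMe/TCP data digest (DDGST) is a CRC32C (Castagnoli) checksum over the PDU payload; below is a minimal, purely illustrative bitwise Python sketch of that checksum, not SPDK's accelerated implementation.

def crc32c(data: bytes) -> int:
    # Bitwise CRC-32C (Castagnoli): reflected polynomial 0x82F63B78,
    # initial value 0xFFFFFFFF, final XOR with 0xFFFFFFFF.
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF

# Well-known CRC-32C check value.
assert crc32c(b"123456789") == 0xE3069283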
00:23:51.620 [2024-07-24 17:24:37.679381] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.620 [2024-07-24 17:24:37.679481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.620 [2024-07-24 17:24:37.679510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.620 [2024-07-24 17:24:37.683869] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.620 [2024-07-24 17:24:37.683906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.620 [2024-07-24 17:24:37.683936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.620 [2024-07-24 17:24:37.687204] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.620 [2024-07-24 17:24:37.687243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.620 [2024-07-24 17:24:37.687258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.620 [2024-07-24 17:24:37.691673] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.620 [2024-07-24 17:24:37.691709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.620 [2024-07-24 17:24:37.691739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.620 [2024-07-24 17:24:37.695775] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.620 [2024-07-24 17:24:37.695811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.620 [2024-07-24 17:24:37.695841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.620 [2024-07-24 17:24:37.699332] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.620 [2024-07-24 17:24:37.699371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.620 [2024-07-24 17:24:37.699416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.620 [2024-07-24 17:24:37.703317] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.620 [2024-07-24 17:24:37.703358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.620 [2024-07-24 17:24:37.703372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.620 [2024-07-24 17:24:37.706935] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.620 [2024-07-24 17:24:37.706971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.620 [2024-07-24 17:24:37.707000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.620 [2024-07-24 17:24:37.710186] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.620 [2024-07-24 17:24:37.710222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.620 [2024-07-24 17:24:37.710250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.620 [2024-07-24 17:24:37.714682] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.620 [2024-07-24 17:24:37.714720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.620 [2024-07-24 17:24:37.714750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.620 [2024-07-24 17:24:37.717955] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.620 [2024-07-24 17:24:37.717991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.620 [2024-07-24 17:24:37.718020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.620 [2024-07-24 17:24:37.721998] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.621 [2024-07-24 17:24:37.722034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.621 [2024-07-24 17:24:37.722063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.621 [2024-07-24 17:24:37.726230] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.621 [2024-07-24 17:24:37.726266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.621 [2024-07-24 17:24:37.726296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.621 [2024-07-24 17:24:37.730413] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.621 [2024-07-24 17:24:37.730449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.621 [2024-07-24 17:24:37.730478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.621 [2024-07-24 17:24:37.734490] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.621 [2024-07-24 17:24:37.734526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.621 [2024-07-24 17:24:37.734555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.621 [2024-07-24 17:24:37.738647] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.621 [2024-07-24 17:24:37.738683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.621 [2024-07-24 17:24:37.738719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.621 [2024-07-24 17:24:37.742266] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.621 [2024-07-24 17:24:37.742302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.621 [2024-07-24 17:24:37.742331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.621 [2024-07-24 17:24:37.746062] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.621 [2024-07-24 17:24:37.746126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.621 [2024-07-24 17:24:37.746140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.621 [2024-07-24 17:24:37.750042] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.621 [2024-07-24 17:24:37.750081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.621 [2024-07-24 17:24:37.750123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.621 [2024-07-24 17:24:37.753394] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.621 [2024-07-24 17:24:37.753435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.621 [2024-07-24 17:24:37.753449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.621 [2024-07-24 17:24:37.757813] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.621 [2024-07-24 17:24:37.757853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.621 [2024-07-24 17:24:37.757883] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.621 [2024-07-24 17:24:37.762430] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.621 [2024-07-24 17:24:37.762467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.621 [2024-07-24 17:24:37.762496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.621 [2024-07-24 17:24:37.766935] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.621 [2024-07-24 17:24:37.766974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.621 [2024-07-24 17:24:37.767021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.621 [2024-07-24 17:24:37.770969] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.621 [2024-07-24 17:24:37.771041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.621 [2024-07-24 17:24:37.771098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.621 [2024-07-24 17:24:37.774722] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.621 [2024-07-24 17:24:37.774759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.621 [2024-07-24 17:24:37.774788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.621 [2024-07-24 17:24:37.778752] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.621 [2024-07-24 17:24:37.778789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.621 [2024-07-24 17:24:37.778819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.621 [2024-07-24 17:24:37.782881] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.621 [2024-07-24 17:24:37.782919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.621 [2024-07-24 17:24:37.782948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.621 [2024-07-24 17:24:37.787156] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.621 [2024-07-24 17:24:37.787197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.621 
[2024-07-24 17:24:37.787211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.621 [2024-07-24 17:24:37.790517] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.621 [2024-07-24 17:24:37.790554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.621 [2024-07-24 17:24:37.790583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.621 [2024-07-24 17:24:37.794313] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.621 [2024-07-24 17:24:37.794351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.621 [2024-07-24 17:24:37.794380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.621 [2024-07-24 17:24:37.798221] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.621 [2024-07-24 17:24:37.798258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.621 [2024-07-24 17:24:37.798287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:51.621 [2024-07-24 17:24:37.802172] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.621 [2024-07-24 17:24:37.802209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.621 [2024-07-24 17:24:37.802238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:51.621 [2024-07-24 17:24:37.805831] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.621 [2024-07-24 17:24:37.805869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.621 [2024-07-24 17:24:37.805899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:51.621 [2024-07-24 17:24:37.809999] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.621 [2024-07-24 17:24:37.810036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:51.621 [2024-07-24 17:24:37.810066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:51.621 [2024-07-24 17:24:37.814051] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0) 00:23:51.621 [2024-07-24 17:24:37.814089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8672 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.621 [2024-07-24 17:24:37.814128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:23:51.621 [2024-07-24 17:24:37.816913] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0)
00:23:51.621 [2024-07-24 17:24:37.816950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.621 [2024-07-24 17:24:37.816979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:23:51.621 [2024-07-24 17:24:37.820704] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xff1fd0)
00:23:51.621 [2024-07-24 17:24:37.820741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:51.621 [2024-07-24 17:24:37.820770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:23:51.621
00:23:51.621 Latency(us)
00:23:51.621 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:51.621 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:23:51.621 nvme0n1 : 2.00 7636.49 954.56 0.00 0.00 2091.02 547.37 9830.40
00:23:51.622 ===================================================================================================================
00:23:51.622 Total : 7636.49 954.56 0.00 0.00 2091.02 547.37 9830.40
00:23:51.622 0
00:23:51.622 17:24:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:23:51.622 17:24:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:23:51.622 17:24:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:23:51.622 | .driver_specific
00:23:51.622 | .nvme_error
00:23:51.622 | .status_code
00:23:51.622 | .command_transient_transport_error'
00:23:51.622 17:24:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:23:51.880 17:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 493 > 0 ))
00:23:51.880 17:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 93070
00:23:51.880 17:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 93070 ']'
00:23:51.880 17:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 93070
00:23:51.880 17:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:23:52.139 17:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:23:52.139 17:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 93070
00:23:52.139 killing process with pid 93070 Received shutdown signal, test time was about 2.000000 seconds
00:23:52.139
00:23:52.139 Latency(us)
00:23:52.139 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:52.139 ===================================================================================================================
00:23:52.139 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:23:52.139 17:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:23:52.139 17:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:23:52.139 17:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 93070'
00:23:52.139 17:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 93070
00:23:52.139 17:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 93070
00:23:52.139 17:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:23:52.139 17:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:23:52.139 17:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:23:52.139 17:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:23:52.139 17:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:23:52.139 17:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=93160
00:23:52.139 17:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:23:52.139 17:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 93160 /var/tmp/bperf.sock
00:23:52.139 17:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 93160 ']'
00:23:52.139 17:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:23:52.139 17:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:23:52.139 17:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:23:52.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:23:52.139 17:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:23:52.139 17:24:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:23:52.409 [2024-07-24 17:24:38.416662] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization...
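The trace above is where the randread digest-error pass is judged: host/digest.sh reads the per-bdev NVMe error counters over the bdevperf RPC socket and asserts that the injected digest failures were counted as transient transport errors, which is the (( 493 > 0 )) check. As a sanity check on the summary table, 7636.49 IOPS at an IO size of 131072 bytes (128 KiB, i.e. 1/8 MiB) works out to 7636.49 / 8 ≈ 954.56 MiB/s, matching the reported throughput. Below is a rough stand-alone Python sketch of the same counter query; it assumes the rpc.py path and /var/tmp/bperf.sock socket used in this run, and that the controller was attached with --nvme-error-stat enabled.

import json
import subprocess

RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"  # path used in this run
SOCK = "/var/tmp/bperf.sock"                         # bdevperf RPC socket

# Fetch per-bdev I/O statistics for nvme0n1 as JSON.
out = subprocess.run([RPC, "-s", SOCK, "bdev_get_iostat", "-b", "nvme0n1"],
                     check=True, capture_output=True, text=True).stdout
stats = json.loads(out)

# Same field the jq filter in the trace extracts.
errcount = stats["bdevs"][0]["driver_specific"]["nvme_error"]["status_code"][
    "command_transient_transport_error"]
assert errcount > 0, "expected injected digest errors to be counted"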
00:23:52.409 [2024-07-24 17:24:38.417007] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93160 ]
00:23:52.409 [2024-07-24 17:24:38.559566] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:52.675 [2024-07-24 17:24:38.670386] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:23:53.242 17:24:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:23:53.242 17:24:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:23:53.243 17:24:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:23:53.243 17:24:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:23:53.501 17:24:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:23:53.501 17:24:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:53.501 17:24:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:23:53.501 17:24:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:53.501 17:24:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:23:53.501 17:24:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:23:53.759 nvme0n1
00:23:54.017 17:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:23:54.017 17:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:54.017 17:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:23:54.017 17:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:54.017 17:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:23:54.017 17:24:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:23:54.017 Running I/O for 2 seconds...
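The trace above sets up the second pass (randwrite, 4096-byte IOs, queue depth 128): crc32c error injection is first disabled while the controller is attached with data digest enabled (--ddgst), then re-armed in corrupt mode before perform_tests starts the timed workload. A rough Python sketch of the same RPC sequence follows; the commands and arguments are copied from the trace, and only the subprocess wrapper is added for illustration.

import subprocess

RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"   # paths as used in this run
BPERF_PY = "/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py"
SOCK = "/var/tmp/bperf.sock"

def rpc(*args: str) -> str:
    # Issue one SPDK JSON-RPC call against the bdevperf socket.
    return subprocess.run([RPC, "-s", SOCK, *args],
                          check=True, capture_output=True, text=True).stdout

rpc("bdev_nvme_set_options", "--nvme-error-stat", "--bdev-retry-count", "-1")
rpc("accel_error_inject_error", "-o", "crc32c", "-t", "disable")
rpc("bdev_nvme_attach_controller", "--ddgst", "-t", "tcp", "-a", "10.0.0.2",
    "-s", "4420", "-f", "ipv4", "-n", "nqn.2016-06.io.spdk:cnode1", "-b", "nvme0")
rpc("accel_error_inject_error", "-o", "crc32c", "-t", "corrupt", "-i", "256")

# Kick off the timed run (equivalent of the "bperf_py perform_tests" step above).
subprocess.run([BPERF_PY, "-s", SOCK, "perform_tests"], check=True)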
00:23:54.017 [2024-07-24 17:24:40.129452] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190ee5c8 00:23:54.017 [2024-07-24 17:24:40.130459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.017 [2024-07-24 17:24:40.130514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:54.017 [2024-07-24 17:24:40.142995] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190e95a0 00:23:54.017 [2024-07-24 17:24:40.144477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:15225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.017 [2024-07-24 17:24:40.144544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:54.017 [2024-07-24 17:24:40.154157] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190ebb98 00:23:54.017 [2024-07-24 17:24:40.155574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:14429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.017 [2024-07-24 17:24:40.155623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:54.017 [2024-07-24 17:24:40.165005] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190f9b30 00:23:54.017 [2024-07-24 17:24:40.166199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:9672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.017 [2024-07-24 17:24:40.166253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:54.017 [2024-07-24 17:24:40.175529] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190edd58 00:23:54.017 [2024-07-24 17:24:40.176514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:23482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.017 [2024-07-24 17:24:40.176562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:54.017 [2024-07-24 17:24:40.188613] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190ec408 00:23:54.017 [2024-07-24 17:24:40.190051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:5005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.017 [2024-07-24 17:24:40.190087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:54.017 [2024-07-24 17:24:40.200170] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190f5be8 00:23:54.017 [2024-07-24 17:24:40.201308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:15102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.017 [2024-07-24 17:24:40.201346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 
sqhd:0066 p:0 m:0 dnr:0 00:23:54.017 [2024-07-24 17:24:40.212643] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190fe2e8 00:23:54.017 [2024-07-24 17:24:40.213828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:14119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.017 [2024-07-24 17:24:40.213877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:54.017 [2024-07-24 17:24:40.224184] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190eaab8 00:23:54.017 [2024-07-24 17:24:40.225206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:6754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.017 [2024-07-24 17:24:40.225247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:54.017 [2024-07-24 17:24:40.234873] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190e6fa8 00:23:54.017 [2024-07-24 17:24:40.235752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:20184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.017 [2024-07-24 17:24:40.235785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:54.017 [2024-07-24 17:24:40.247745] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190ea248 00:23:54.018 [2024-07-24 17:24:40.249616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:16059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.018 [2024-07-24 17:24:40.249650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:54.277 [2024-07-24 17:24:40.258690] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190f20d8 00:23:54.277 [2024-07-24 17:24:40.259994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:13808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.277 [2024-07-24 17:24:40.260029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:54.277 [2024-07-24 17:24:40.269030] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190f7538 00:23:54.277 [2024-07-24 17:24:40.270779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:22560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.277 [2024-07-24 17:24:40.270813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:54.277 [2024-07-24 17:24:40.281623] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190e7818 00:23:54.277 [2024-07-24 17:24:40.282617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:12541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.277 [2024-07-24 17:24:40.282648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:21 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:54.277 [2024-07-24 17:24:40.291184] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190f2948 00:23:54.277 [2024-07-24 17:24:40.292284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:16291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.277 [2024-07-24 17:24:40.292315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:54.277 [2024-07-24 17:24:40.301819] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190f81e0 00:23:54.277 [2024-07-24 17:24:40.302490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.277 [2024-07-24 17:24:40.302537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:54.277 [2024-07-24 17:24:40.315104] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190e4140 00:23:54.277 [2024-07-24 17:24:40.316944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:9209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.277 [2024-07-24 17:24:40.316975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:54.277 [2024-07-24 17:24:40.322873] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190ef270 00:23:54.277 [2024-07-24 17:24:40.323951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:18826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.277 [2024-07-24 17:24:40.323983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:54.277 [2024-07-24 17:24:40.333861] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190ee190 00:23:54.277 [2024-07-24 17:24:40.334852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:11570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.277 [2024-07-24 17:24:40.334883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:54.277 [2024-07-24 17:24:40.345022] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190fef90 00:23:54.277 [2024-07-24 17:24:40.345772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:6676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.277 [2024-07-24 17:24:40.345819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:54.277 [2024-07-24 17:24:40.354652] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190eb760 00:23:54.277 [2024-07-24 17:24:40.355585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.277 [2024-07-24 17:24:40.355616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:54.277 [2024-07-24 17:24:40.367488] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190e1f80 00:23:54.277 [2024-07-24 17:24:40.368928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:4611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.277 [2024-07-24 17:24:40.368961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:54.277 [2024-07-24 17:24:40.378397] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190e3498 00:23:54.277 [2024-07-24 17:24:40.379912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.277 [2024-07-24 17:24:40.379944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:54.277 [2024-07-24 17:24:40.389872] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190f2948 00:23:54.277 [2024-07-24 17:24:40.391300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:11250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.277 [2024-07-24 17:24:40.391335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:54.277 [2024-07-24 17:24:40.400331] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190dece0 00:23:54.277 [2024-07-24 17:24:40.401521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.277 [2024-07-24 17:24:40.401552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:54.277 [2024-07-24 17:24:40.410559] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190ea680 00:23:54.277 [2024-07-24 17:24:40.411655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:17369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.277 [2024-07-24 17:24:40.411686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:54.277 [2024-07-24 17:24:40.421050] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190f7538 00:23:54.277 [2024-07-24 17:24:40.421946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:15725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.277 [2024-07-24 17:24:40.421978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:54.277 [2024-07-24 17:24:40.433234] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190df550 00:23:54.277 [2024-07-24 17:24:40.434560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:9280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.277 [2024-07-24 17:24:40.434592] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:54.277 [2024-07-24 17:24:40.443467] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190e6300 00:23:54.277 [2024-07-24 17:24:40.444564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:21228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.277 [2024-07-24 17:24:40.444597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:54.277 [2024-07-24 17:24:40.454093] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190e12d8 00:23:54.277 [2024-07-24 17:24:40.455250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:21101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.277 [2024-07-24 17:24:40.455286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:54.277 [2024-07-24 17:24:40.465910] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190e7c50 00:23:54.277 [2024-07-24 17:24:40.467122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:5700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.277 [2024-07-24 17:24:40.467155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:54.277 [2024-07-24 17:24:40.477724] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190df550 00:23:54.277 [2024-07-24 17:24:40.478715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.277 [2024-07-24 17:24:40.478747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:54.277 [2024-07-24 17:24:40.488988] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190f1430 00:23:54.277 [2024-07-24 17:24:40.489828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:21543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.277 [2024-07-24 17:24:40.489861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:54.277 [2024-07-24 17:24:40.503805] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190f1ca0 00:23:54.277 [2024-07-24 17:24:40.505282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:13585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.277 [2024-07-24 17:24:40.505318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:54.537 [2024-07-24 17:24:40.514697] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190f6cc8 00:23:54.537 [2024-07-24 17:24:40.516552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:2073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.537 [2024-07-24 17:24:40.516586] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:54.537 [2024-07-24 17:24:40.527090] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190e6738 00:23:54.537 [2024-07-24 17:24:40.528066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:11648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.537 [2024-07-24 17:24:40.528124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:54.537 [2024-07-24 17:24:40.538360] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190f20d8 00:23:54.537 [2024-07-24 17:24:40.539228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:22314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.537 [2024-07-24 17:24:40.539265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:54.537 [2024-07-24 17:24:40.549208] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190e1710 00:23:54.537 [2024-07-24 17:24:40.550346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:19719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.537 [2024-07-24 17:24:40.550378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:54.537 [2024-07-24 17:24:40.560018] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190e4140 00:23:54.537 [2024-07-24 17:24:40.561016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:10650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.537 [2024-07-24 17:24:40.561048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:54.537 [2024-07-24 17:24:40.571234] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190e3d08 00:23:54.537 [2024-07-24 17:24:40.572353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:2678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.537 [2024-07-24 17:24:40.572386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:54.538 [2024-07-24 17:24:40.584607] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190e7c50 00:23:54.538 [2024-07-24 17:24:40.586307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.538 [2024-07-24 17:24:40.586340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:54.538 [2024-07-24 17:24:40.595963] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190e6fa8 00:23:54.538 [2024-07-24 17:24:40.597722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:4991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.538 [2024-07-24 
17:24:40.597754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:54.538 [2024-07-24 17:24:40.605074] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190f6020 00:23:54.538 [2024-07-24 17:24:40.606281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:1251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.538 [2024-07-24 17:24:40.606313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:54.538 [2024-07-24 17:24:40.616836] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190e8088 00:23:54.538 [2024-07-24 17:24:40.618096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:13793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.538 [2024-07-24 17:24:40.618140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:54.538 [2024-07-24 17:24:40.627730] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190f92c0 00:23:54.538 [2024-07-24 17:24:40.629001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:10119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.538 [2024-07-24 17:24:40.629034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:54.538 [2024-07-24 17:24:40.639673] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190fe720 00:23:54.538 [2024-07-24 17:24:40.640961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:25499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.538 [2024-07-24 17:24:40.641025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:54.538 [2024-07-24 17:24:40.651298] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190e01f8 00:23:54.538 [2024-07-24 17:24:40.652499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.538 [2024-07-24 17:24:40.652531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:54.538 [2024-07-24 17:24:40.662662] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190e38d0 00:23:54.538 [2024-07-24 17:24:40.663666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:16924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.538 [2024-07-24 17:24:40.663699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:54.538 [2024-07-24 17:24:40.674046] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190ef6a8 00:23:54.538 [2024-07-24 17:24:40.674894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:13693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:54.538 [2024-07-24 17:24:40.674926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:54.538 [2024-07-24 17:24:40.687923] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190eff18 00:23:54.538 [2024-07-24 17:24:40.689392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:12398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.538 [2024-07-24 17:24:40.689423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:54.538 [2024-07-24 17:24:40.699435] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190e0630 00:23:54.538 [2024-07-24 17:24:40.700485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:13941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.538 [2024-07-24 17:24:40.700518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:54.538 [2024-07-24 17:24:40.709885] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190e27f0 00:23:54.538 [2024-07-24 17:24:40.711674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:18542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.538 [2024-07-24 17:24:40.711707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:54.538 [2024-07-24 17:24:40.719775] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190fa3a0 00:23:54.538 [2024-07-24 17:24:40.720651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:9840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.538 [2024-07-24 17:24:40.720682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:54.538 [2024-07-24 17:24:40.733747] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190ed4e8 00:23:54.538 [2024-07-24 17:24:40.735161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:5880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.538 [2024-07-24 17:24:40.735196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:54.538 [2024-07-24 17:24:40.743891] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190e3060 00:23:54.538 [2024-07-24 17:24:40.745035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:20522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.538 [2024-07-24 17:24:40.745068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:54.538 [2024-07-24 17:24:40.754917] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190e1b48 00:23:54.538 [2024-07-24 17:24:40.756191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:8979 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:23:54.538 [2024-07-24 17:24:40.756256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:54.538 [2024-07-24 17:24:40.766206] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190eaab8 00:23:54.538 [2024-07-24 17:24:40.767575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.538 [2024-07-24 17:24:40.767609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:54.801 [2024-07-24 17:24:40.777999] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190ed4e8 00:23:54.801 [2024-07-24 17:24:40.779043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:15403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.801 [2024-07-24 17:24:40.779111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:54.801 [2024-07-24 17:24:40.788517] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190fe2e8 00:23:54.801 [2024-07-24 17:24:40.789475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:2685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.801 [2024-07-24 17:24:40.789508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:54.801 [2024-07-24 17:24:40.802522] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190fc560 00:23:54.801 [2024-07-24 17:24:40.803624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.801 [2024-07-24 17:24:40.803658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:54.801 [2024-07-24 17:24:40.813032] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190f4f40 00:23:54.801 [2024-07-24 17:24:40.813987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:10577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.801 [2024-07-24 17:24:40.814022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:54.801 [2024-07-24 17:24:40.823642] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190ec840 00:23:54.801 [2024-07-24 17:24:40.824368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:1767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.801 [2024-07-24 17:24:40.824400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:54.801 [2024-07-24 17:24:40.834526] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190e3d08 00:23:54.801 [2024-07-24 17:24:40.835617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:19132 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.801 [2024-07-24 17:24:40.835649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:54.801 [2024-07-24 17:24:40.845074] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190f0788 00:23:54.801 [2024-07-24 17:24:40.846005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:4842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.801 [2024-07-24 17:24:40.846036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:54.801 [2024-07-24 17:24:40.855810] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190e1f80 00:23:54.801 [2024-07-24 17:24:40.856555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:20315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.801 [2024-07-24 17:24:40.856586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:54.801 [2024-07-24 17:24:40.868289] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190f0788 00:23:54.801 [2024-07-24 17:24:40.869684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:12800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.801 [2024-07-24 17:24:40.869715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:54.801 [2024-07-24 17:24:40.879633] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190fc128 00:23:54.801 [2024-07-24 17:24:40.880535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:14757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.801 [2024-07-24 17:24:40.880568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:54.801 [2024-07-24 17:24:40.890271] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190e6300 00:23:54.801 [2024-07-24 17:24:40.891035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:1855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.801 [2024-07-24 17:24:40.891133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:54.801 [2024-07-24 17:24:40.901280] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190df988 00:23:54.801 [2024-07-24 17:24:40.901887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:23703 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.801 [2024-07-24 17:24:40.901936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:54.801 [2024-07-24 17:24:40.912710] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190fac10 00:23:54.801 [2024-07-24 17:24:40.913676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:49 nsid:1 lba:23094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.801 [2024-07-24 17:24:40.913708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:54.801 [2024-07-24 17:24:40.922928] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190de8a8 00:23:54.801 [2024-07-24 17:24:40.923734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:12645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.801 [2024-07-24 17:24:40.923765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:54.801 [2024-07-24 17:24:40.936142] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190e2c28 00:23:54.801 [2024-07-24 17:24:40.937465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:10795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.801 [2024-07-24 17:24:40.937499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:54.801 [2024-07-24 17:24:40.946852] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190f6890 00:23:54.802 [2024-07-24 17:24:40.948293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:4139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.802 [2024-07-24 17:24:40.948326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:54.802 [2024-07-24 17:24:40.958412] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190fc128 00:23:54.802 [2024-07-24 17:24:40.959804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:14396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.802 [2024-07-24 17:24:40.959835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:54.802 [2024-07-24 17:24:40.969030] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190e3498 00:23:54.802 [2024-07-24 17:24:40.970320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:24474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.802 [2024-07-24 17:24:40.970357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:54.802 [2024-07-24 17:24:40.979666] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190fa7d8 00:23:54.802 [2024-07-24 17:24:40.980737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:12441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.802 [2024-07-24 17:24:40.980768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:54.802 [2024-07-24 17:24:40.990149] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190f46d0 00:23:54.802 [2024-07-24 17:24:40.991129] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:16301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.802 [2024-07-24 17:24:40.991165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:54.802 [2024-07-24 17:24:41.001192] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190e6fa8 00:23:54.802 [2024-07-24 17:24:41.002164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:10930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.802 [2024-07-24 17:24:41.002209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:54.802 [2024-07-24 17:24:41.014439] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190ea680 00:23:54.802 [2024-07-24 17:24:41.016022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:3922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.802 [2024-07-24 17:24:41.016069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:54.802 [2024-07-24 17:24:41.024807] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190f9b30 00:23:54.802 [2024-07-24 17:24:41.026025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:16483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:54.802 [2024-07-24 17:24:41.026074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:55.062 [2024-07-24 17:24:41.038760] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190e7818 00:23:55.062 [2024-07-24 17:24:41.039979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.062 [2024-07-24 17:24:41.040015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:55.062 [2024-07-24 17:24:41.049719] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190fcdd0 00:23:55.062 [2024-07-24 17:24:41.050756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:7209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.062 [2024-07-24 17:24:41.050804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:55.062 [2024-07-24 17:24:41.060987] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190e7c50 00:23:55.062 [2024-07-24 17:24:41.061863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:19209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.062 [2024-07-24 17:24:41.061897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:55.062 [2024-07-24 17:24:41.073762] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190e9e10 00:23:55.062 [2024-07-24 17:24:41.074848] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:25169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.062 [2024-07-24 17:24:41.074883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:55.062 [2024-07-24 17:24:41.085198] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190fbcf0 00:23:55.062 [2024-07-24 17:24:41.086018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:12542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.062 [2024-07-24 17:24:41.086108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:55.062 [2024-07-24 17:24:41.096002] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190eee38 00:23:55.062 [2024-07-24 17:24:41.096754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:22820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.062 [2024-07-24 17:24:41.096788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:55.062 [2024-07-24 17:24:41.108403] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190e88f8 00:23:55.062 [2024-07-24 17:24:41.109885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:19117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.062 [2024-07-24 17:24:41.109916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:55.062 [2024-07-24 17:24:41.118824] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190ee5c8 00:23:55.062 [2024-07-24 17:24:41.120206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:5148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.062 [2024-07-24 17:24:41.120236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:55.062 [2024-07-24 17:24:41.128879] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190f57b0 00:23:55.062 [2024-07-24 17:24:41.130011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:19497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.062 [2024-07-24 17:24:41.130042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:55.062 [2024-07-24 17:24:41.139434] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190f8618 00:23:55.063 [2024-07-24 17:24:41.140637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:25107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.063 [2024-07-24 17:24:41.140668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:55.063 [2024-07-24 17:24:41.150965] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190efae0 00:23:55.063 [2024-07-24 
17:24:41.152213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:8604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.063 [2024-07-24 17:24:41.152245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:55.063 [2024-07-24 17:24:41.164796] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190ebfd0 00:23:55.063 [2024-07-24 17:24:41.166488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:23152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.063 [2024-07-24 17:24:41.166523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:55.063 [2024-07-24 17:24:41.175805] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190ec408 00:23:55.063 [2024-07-24 17:24:41.177228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.063 [2024-07-24 17:24:41.177263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:55.063 [2024-07-24 17:24:41.187584] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190f0788 00:23:55.063 [2024-07-24 17:24:41.189021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:11610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.063 [2024-07-24 17:24:41.189071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:55.063 [2024-07-24 17:24:41.198989] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190e5ec8 00:23:55.063 [2024-07-24 17:24:41.200016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.063 [2024-07-24 17:24:41.200066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:55.063 [2024-07-24 17:24:41.209476] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190f5be8 00:23:55.063 [2024-07-24 17:24:41.210331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:2788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.063 [2024-07-24 17:24:41.210379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:55.063 [2024-07-24 17:24:41.219701] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190e3060 00:23:55.063 [2024-07-24 17:24:41.220365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.063 [2024-07-24 17:24:41.220413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:55.063 [2024-07-24 17:24:41.231879] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190eea00 
00:23:55.063 [2024-07-24 17:24:41.233307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:11591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.063 [2024-07-24 17:24:41.233355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:55.063 [2024-07-24 17:24:41.242028] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190eaab8 00:23:55.063 [2024-07-24 17:24:41.243370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:23552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.063 [2024-07-24 17:24:41.243436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:55.063 [2024-07-24 17:24:41.253879] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190fdeb0 00:23:55.063 [2024-07-24 17:24:41.255718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:5636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.063 [2024-07-24 17:24:41.255765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:55.063 [2024-07-24 17:24:41.262498] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190e5a90 00:23:55.063 [2024-07-24 17:24:41.263706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:7940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.063 [2024-07-24 17:24:41.263753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:55.063 [2024-07-24 17:24:41.272742] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190e6738 00:23:55.063 [2024-07-24 17:24:41.273730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:18974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.063 [2024-07-24 17:24:41.273776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:55.063 [2024-07-24 17:24:41.282866] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190fe720 00:23:55.063 [2024-07-24 17:24:41.283744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:12640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.063 [2024-07-24 17:24:41.283792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:55.063 [2024-07-24 17:24:41.295938] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190fdeb0 00:23:55.322 [2024-07-24 17:24:41.297051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:7495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.322 [2024-07-24 17:24:41.297088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:55.322 [2024-07-24 17:24:41.305932] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x13c9320) with pdu=0x2000190f0bc0 00:23:55.322 [2024-07-24 17:24:41.307105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:11038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.322 [2024-07-24 17:24:41.307154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:55.322 [2024-07-24 17:24:41.318579] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190f1430 00:23:55.322 [2024-07-24 17:24:41.320368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:9232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.322 [2024-07-24 17:24:41.320418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:55.322 [2024-07-24 17:24:41.329544] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190ef270 00:23:55.322 [2024-07-24 17:24:41.331305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:8672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.322 [2024-07-24 17:24:41.331356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:55.322 [2024-07-24 17:24:41.338448] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190e49b0 00:23:55.322 [2024-07-24 17:24:41.339438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:7449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.322 [2024-07-24 17:24:41.339486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:55.322 [2024-07-24 17:24:41.349434] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190dece0 00:23:55.322 [2024-07-24 17:24:41.350733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:10129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.322 [2024-07-24 17:24:41.350785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:55.322 [2024-07-24 17:24:41.360665] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190e7818 00:23:55.322 [2024-07-24 17:24:41.362111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.322 [2024-07-24 17:24:41.362165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:55.322 [2024-07-24 17:24:41.370624] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190e9168 00:23:55.322 [2024-07-24 17:24:41.371586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:18448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.322 [2024-07-24 17:24:41.371633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:55.322 [2024-07-24 17:24:41.381671] tcp.c:2113:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x13c9320) with pdu=0x2000190ddc00 00:23:55.322 [2024-07-24 17:24:41.382552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:25296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.322 [2024-07-24 17:24:41.382599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:55.322 [2024-07-24 17:24:41.394448] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190efae0 00:23:55.322 [2024-07-24 17:24:41.395520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:18915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.322 [2024-07-24 17:24:41.395568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:55.322 [2024-07-24 17:24:41.406015] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190fd208 00:23:55.322 [2024-07-24 17:24:41.407318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:23482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.322 [2024-07-24 17:24:41.407354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:55.322 [2024-07-24 17:24:41.417757] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190dece0 00:23:55.322 [2024-07-24 17:24:41.419180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:23019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.322 [2024-07-24 17:24:41.419214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:55.322 [2024-07-24 17:24:41.432124] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190e9e10 00:23:55.323 [2024-07-24 17:24:41.434239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:20153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.323 [2024-07-24 17:24:41.434291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.323 [2024-07-24 17:24:41.440550] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190e8088 00:23:55.323 [2024-07-24 17:24:41.441516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:8715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.323 [2024-07-24 17:24:41.441564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:55.323 [2024-07-24 17:24:41.452988] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190f35f0 00:23:55.323 [2024-07-24 17:24:41.454096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:17286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.323 [2024-07-24 17:24:41.454189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:55.323 [2024-07-24 17:24:41.464208] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190f5be8 00:23:55.323 [2024-07-24 17:24:41.465845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:10974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.323 [2024-07-24 17:24:41.465894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:55.323 [2024-07-24 17:24:41.474933] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190e88f8 00:23:55.323 [2024-07-24 17:24:41.476187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:4620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.323 [2024-07-24 17:24:41.476247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:55.323 [2024-07-24 17:24:41.486581] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190e0630 00:23:55.323 [2024-07-24 17:24:41.487899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:12395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.323 [2024-07-24 17:24:41.487948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:55.323 [2024-07-24 17:24:41.500936] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190e01f8 00:23:55.323 [2024-07-24 17:24:41.502983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:4243 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.323 [2024-07-24 17:24:41.503030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:55.323 [2024-07-24 17:24:41.509448] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190fa3a0 00:23:55.323 [2024-07-24 17:24:41.510316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:4553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.323 [2024-07-24 17:24:41.510351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:55.323 [2024-07-24 17:24:41.523771] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190f4f40 00:23:55.323 [2024-07-24 17:24:41.525440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:8435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.323 [2024-07-24 17:24:41.525488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:55.323 [2024-07-24 17:24:41.533926] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190f5be8 00:23:55.323 [2024-07-24 17:24:41.535679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.323 [2024-07-24 17:24:41.535726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:55.323 
[2024-07-24 17:24:41.542826] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190de8a8 00:23:55.323 [2024-07-24 17:24:41.543848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.323 [2024-07-24 17:24:41.543895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:55.323 [2024-07-24 17:24:41.553355] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190ed0b0 00:23:55.323 [2024-07-24 17:24:41.554256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.323 [2024-07-24 17:24:41.554307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:55.582 [2024-07-24 17:24:41.566771] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190feb58 00:23:55.583 [2024-07-24 17:24:41.567842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:24710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.583 [2024-07-24 17:24:41.567891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:55.583 [2024-07-24 17:24:41.577046] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190f5be8 00:23:55.583 [2024-07-24 17:24:41.577945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:9121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.583 [2024-07-24 17:24:41.577993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:55.583 [2024-07-24 17:24:41.587438] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190efae0 00:23:55.583 [2024-07-24 17:24:41.588163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.583 [2024-07-24 17:24:41.588221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:55.583 [2024-07-24 17:24:41.597080] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190e6738 00:23:55.583 [2024-07-24 17:24:41.597957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:2601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.583 [2024-07-24 17:24:41.598021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:55.583 [2024-07-24 17:24:41.608072] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190f5378 00:23:55.583 [2024-07-24 17:24:41.608950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:18640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.583 [2024-07-24 17:24:41.608996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:001a p:0 m:0 dnr:0 
00:23:55.583 [2024-07-24 17:24:41.621532] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190f5be8 00:23:55.583 [2024-07-24 17:24:41.623286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:22896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.583 [2024-07-24 17:24:41.623338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:55.583 [2024-07-24 17:24:41.632443] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190fa3a0 00:23:55.583 [2024-07-24 17:24:41.633725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:11196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.583 [2024-07-24 17:24:41.633773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:55.583 [2024-07-24 17:24:41.643498] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190df118 00:23:55.583 [2024-07-24 17:24:41.644743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:14473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.583 [2024-07-24 17:24:41.644791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:55.583 [2024-07-24 17:24:41.654150] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190e9e10 00:23:55.583 [2024-07-24 17:24:41.655297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:23243 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.583 [2024-07-24 17:24:41.655333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:55.583 [2024-07-24 17:24:41.665838] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190eea00 00:23:55.583 [2024-07-24 17:24:41.666644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:5999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.583 [2024-07-24 17:24:41.666692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:55.583 [2024-07-24 17:24:41.679230] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190e6300 00:23:55.583 [2024-07-24 17:24:41.680699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:17915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.583 [2024-07-24 17:24:41.680764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:55.583 [2024-07-24 17:24:41.690666] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190ef270 00:23:55.583 [2024-07-24 17:24:41.692025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:19126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.583 [2024-07-24 17:24:41.692072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 
cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:55.583 [2024-07-24 17:24:41.701503] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190e5ec8 00:23:55.583 [2024-07-24 17:24:41.702760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:15967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.583 [2024-07-24 17:24:41.702808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:55.583 [2024-07-24 17:24:41.714830] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190ed920 00:23:55.583 [2024-07-24 17:24:41.716716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:6708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.583 [2024-07-24 17:24:41.716762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:55.583 [2024-07-24 17:24:41.722509] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190e3d08 00:23:55.583 [2024-07-24 17:24:41.723620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:10818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.583 [2024-07-24 17:24:41.723666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:55.583 [2024-07-24 17:24:41.735631] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190e9168 00:23:55.583 [2024-07-24 17:24:41.737121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:8893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.583 [2024-07-24 17:24:41.737175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:55.583 [2024-07-24 17:24:41.745826] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190e9168 00:23:55.583 [2024-07-24 17:24:41.747241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:2150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.583 [2024-07-24 17:24:41.747291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:55.583 [2024-07-24 17:24:41.756895] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190fda78 00:23:55.583 [2024-07-24 17:24:41.758412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:21189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.583 [2024-07-24 17:24:41.758459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:55.583 [2024-07-24 17:24:41.766872] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190df550 00:23:55.583 [2024-07-24 17:24:41.768049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:16932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.583 [2024-07-24 17:24:41.768122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:55.583 [2024-07-24 17:24:41.777579] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190eff18 00:23:55.583 [2024-07-24 17:24:41.778826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:2587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.583 [2024-07-24 17:24:41.778873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:55.583 [2024-07-24 17:24:41.788832] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190ea248 00:23:55.583 [2024-07-24 17:24:41.790035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.583 [2024-07-24 17:24:41.790082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:55.583 [2024-07-24 17:24:41.799160] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190fda78 00:23:55.583 [2024-07-24 17:24:41.800315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:2197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.583 [2024-07-24 17:24:41.800362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:55.583 [2024-07-24 17:24:41.812029] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190ee5c8 00:23:55.583 [2024-07-24 17:24:41.813731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:19744 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.583 [2024-07-24 17:24:41.813779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:55.843 [2024-07-24 17:24:41.823843] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190f4298 00:23:55.843 [2024-07-24 17:24:41.825504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:22464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.843 [2024-07-24 17:24:41.825580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:55.843 [2024-07-24 17:24:41.832768] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190fd208 00:23:55.843 [2024-07-24 17:24:41.833902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:4067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.843 [2024-07-24 17:24:41.833950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:55.843 [2024-07-24 17:24:41.844647] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190f1ca0 00:23:55.843 [2024-07-24 17:24:41.845437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:25484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.843 [2024-07-24 17:24:41.845487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:55.843 [2024-07-24 17:24:41.856025] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190ff3c8 00:23:55.843 [2024-07-24 17:24:41.857097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:21162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.843 [2024-07-24 17:24:41.857167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:55.843 [2024-07-24 17:24:41.868146] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190e38d0 00:23:55.843 [2024-07-24 17:24:41.869740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:11783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.843 [2024-07-24 17:24:41.869787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:55.843 [2024-07-24 17:24:41.878649] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190fc128 00:23:55.843 [2024-07-24 17:24:41.880118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:7613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.843 [2024-07-24 17:24:41.880191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:55.843 [2024-07-24 17:24:41.888472] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190fb8b8 00:23:55.843 [2024-07-24 17:24:41.889366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:14847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.843 [2024-07-24 17:24:41.889413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:55.843 [2024-07-24 17:24:41.901020] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190f8618 00:23:55.843 [2024-07-24 17:24:41.902537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.843 [2024-07-24 17:24:41.902586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:55.843 [2024-07-24 17:24:41.911699] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190fc128 00:23:55.843 [2024-07-24 17:24:41.912906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:4285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.843 [2024-07-24 17:24:41.912956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:55.843 [2024-07-24 17:24:41.923317] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190f3a28 00:23:55.843 [2024-07-24 17:24:41.924707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:22223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.843 [2024-07-24 17:24:41.924755] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:55.843 [2024-07-24 17:24:41.936735] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190f35f0 00:23:55.843 [2024-07-24 17:24:41.938523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:18097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.843 [2024-07-24 17:24:41.938571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:55.843 [2024-07-24 17:24:41.944388] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190ed4e8 00:23:55.843 [2024-07-24 17:24:41.945174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:25305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.843 [2024-07-24 17:24:41.945228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:55.843 [2024-07-24 17:24:41.957028] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190e49b0 00:23:55.843 [2024-07-24 17:24:41.958535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:17185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.843 [2024-07-24 17:24:41.958581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:55.843 [2024-07-24 17:24:41.967837] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190ddc00 00:23:55.843 [2024-07-24 17:24:41.969397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:9892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.843 [2024-07-24 17:24:41.969444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:55.843 [2024-07-24 17:24:41.976583] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190e95a0 00:23:55.843 [2024-07-24 17:24:41.977560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:19163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.843 [2024-07-24 17:24:41.977606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:55.843 [2024-07-24 17:24:41.986858] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190f3a28 00:23:55.843 [2024-07-24 17:24:41.987702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:7698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.843 [2024-07-24 17:24:41.987762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:55.843 [2024-07-24 17:24:41.999425] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190df550 00:23:55.843 [2024-07-24 17:24:42.000834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:16865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.843 [2024-07-24 17:24:42.000881] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:55.843 [2024-07-24 17:24:42.010185] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190f3e60 00:23:55.843 [2024-07-24 17:24:42.011211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.843 [2024-07-24 17:24:42.011261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:55.843 [2024-07-24 17:24:42.020178] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190df988 00:23:55.843 [2024-07-24 17:24:42.021880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:24608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.843 [2024-07-24 17:24:42.021927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:55.843 [2024-07-24 17:24:42.029492] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190e38d0 00:23:55.843 [2024-07-24 17:24:42.030362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:19710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.843 [2024-07-24 17:24:42.030410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:55.843 [2024-07-24 17:24:42.042152] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190e0a68 00:23:55.843 [2024-07-24 17:24:42.043651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:19738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.843 [2024-07-24 17:24:42.043701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:55.843 [2024-07-24 17:24:42.053070] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190f6890 00:23:55.843 [2024-07-24 17:24:42.054551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:16235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.843 [2024-07-24 17:24:42.054598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:55.843 [2024-07-24 17:24:42.065108] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190fd208 00:23:55.843 [2024-07-24 17:24:42.066952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:2711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.843 [2024-07-24 17:24:42.067000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:55.843 [2024-07-24 17:24:42.072945] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190f3a28 00:23:55.843 [2024-07-24 17:24:42.073894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:6334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.843 [2024-07-24 
17:24:42.073941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:56.102 [2024-07-24 17:24:42.084207] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190fdeb0 00:23:56.102 [2024-07-24 17:24:42.085116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:9817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.102 [2024-07-24 17:24:42.085196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:56.102 [2024-07-24 17:24:42.096097] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190ed920 00:23:56.102 [2024-07-24 17:24:42.097047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:3105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.102 [2024-07-24 17:24:42.097120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:56.102 [2024-07-24 17:24:42.109798] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c9320) with pdu=0x2000190f3e60 00:23:56.102 [2024-07-24 17:24:42.110816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:56.102 [2024-07-24 17:24:42.110850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:56.102 00:23:56.102 Latency(us) 00:23:56.102 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:56.102 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:23:56.102 nvme0n1 : 2.01 22592.48 88.25 0.00 0.00 5656.38 2383.13 14775.39 00:23:56.102 =================================================================================================================== 00:23:56.102 Total : 22592.48 88.25 0.00 0.00 5656.38 2383.13 14775.39 00:23:56.102 0 00:23:56.102 17:24:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:23:56.102 17:24:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:23:56.102 17:24:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:23:56.102 17:24:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:23:56.102 | .driver_specific 00:23:56.102 | .nvme_error 00:23:56.102 | .status_code 00:23:56.102 | .command_transient_transport_error' 00:23:56.361 17:24:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 177 > 0 )) 00:23:56.361 17:24:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 93160 00:23:56.361 17:24:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 93160 ']' 00:23:56.361 17:24:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 93160 00:23:56.361 17:24:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:23:56.361 17:24:42 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:56.361 17:24:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 93160 00:23:56.361 17:24:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:56.361 17:24:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:56.361 killing process with pid 93160 00:23:56.361 17:24:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 93160' 00:23:56.361 Received shutdown signal, test time was about 2.000000 seconds 00:23:56.361 00:23:56.361 Latency(us) 00:23:56.361 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:56.361 =================================================================================================================== 00:23:56.361 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:56.361 17:24:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 93160 00:23:56.362 17:24:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 93160 00:23:56.620 17:24:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:23:56.620 17:24:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:23:56.620 17:24:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:23:56.620 17:24:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:23:56.620 17:24:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:23:56.620 17:24:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=93251 00:23:56.620 17:24:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:23:56.620 17:24:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 93251 /var/tmp/bperf.sock 00:23:56.620 17:24:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 93251 ']' 00:23:56.620 17:24:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:56.620 17:24:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:56.620 17:24:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:56.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:56.620 17:24:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:56.620 17:24:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:56.620 [2024-07-24 17:24:42.727617] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
00:23:56.620 [2024-07-24 17:24:42.727759] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93251 ] 00:23:56.620 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:56.620 Zero copy mechanism will not be used. 00:23:56.878 [2024-07-24 17:24:42.865940] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:56.878 [2024-07-24 17:24:42.976978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:57.812 17:24:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:57.812 17:24:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:23:57.812 17:24:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:57.812 17:24:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:23:57.812 17:24:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:23:57.812 17:24:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:57.812 17:24:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:57.812 17:24:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:57.812 17:24:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:57.812 17:24:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:58.070 nvme0n1 00:23:58.070 17:24:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:23:58.070 17:24:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.070 17:24:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:23:58.070 17:24:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:58.070 17:24:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:23:58.070 17:24:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:58.328 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:58.328 Zero copy mechanism will not be used. 00:23:58.328 Running I/O for 2 seconds... 
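The startup messages above mark the second error pass (randwrite, 131072-byte I/O, queue depth 16). Condensing the xtrace, the sequence is: start bdevperf on its own RPC socket, enable per-status NVMe error counters, attach the controller with data digest enabled, re-enable crc32c corruption in the target's accel layer, then drive I/O. A rough sketch of that flow, assuming (as the trace suggests) that rpc.py calls without -s address the nvmf target application while -s /var/tmp/bperf.sock addresses bdevperf:

    cd /home/vagrant/spdk_repo/spdk

    # bdevperf: 131072-byte random writes, queue depth 16, 2-second run,
    # started in RPC-driven mode (-z) on /var/tmp/bperf.sock
    ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randwrite -o 131072 -t 2 -q 16 -z &
    # (the script waits for /var/tmp/bperf.sock to come up before issuing RPCs)

    # keep per-status error counters and retry failed I/O indefinitely
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options \
        --nvme-error-stat --bdev-retry-count -1

    # keep crc32c injection off while connecting, then attach the target
    # subsystem with data digest (--ddgst) enabled on the initiator side
    ./scripts/rpc.py accel_error_inject_error -o crc32c -t disable
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # inject crc32c corruption on the target (-t corrupt -i 32, as traced),
    # then run the workload over the bperf RPC socket
    ./scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests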
00:23:58.328 [2024-07-24 17:24:44.384026] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.328 [2024-07-24 17:24:44.384391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.328 [2024-07-24 17:24:44.384431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:58.328 [2024-07-24 17:24:44.389660] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.328 [2024-07-24 17:24:44.389961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.328 [2024-07-24 17:24:44.390012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:58.328 [2024-07-24 17:24:44.395276] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.328 [2024-07-24 17:24:44.395625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.328 [2024-07-24 17:24:44.395656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:58.328 [2024-07-24 17:24:44.400589] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.328 [2024-07-24 17:24:44.400885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.328 [2024-07-24 17:24:44.400922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:58.328 [2024-07-24 17:24:44.405820] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.328 [2024-07-24 17:24:44.406152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.328 [2024-07-24 17:24:44.406202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:58.328 [2024-07-24 17:24:44.410934] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.328 [2024-07-24 17:24:44.411299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.328 [2024-07-24 17:24:44.411337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:58.328 [2024-07-24 17:24:44.416042] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.328 [2024-07-24 17:24:44.416385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.328 [2024-07-24 17:24:44.416421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:58.328 [2024-07-24 17:24:44.421032] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.329 [2024-07-24 17:24:44.421383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.329 [2024-07-24 17:24:44.421420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:58.329 [2024-07-24 17:24:44.425934] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.329 [2024-07-24 17:24:44.426263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.329 [2024-07-24 17:24:44.426300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:58.329 [2024-07-24 17:24:44.430817] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.329 [2024-07-24 17:24:44.431185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.329 [2024-07-24 17:24:44.431224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:58.329 [2024-07-24 17:24:44.435736] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.329 [2024-07-24 17:24:44.436032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.329 [2024-07-24 17:24:44.436068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:58.329 [2024-07-24 17:24:44.440699] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.329 [2024-07-24 17:24:44.441013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.329 [2024-07-24 17:24:44.441050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:58.329 [2024-07-24 17:24:44.445630] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.329 [2024-07-24 17:24:44.445928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.329 [2024-07-24 17:24:44.445965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:58.329 [2024-07-24 17:24:44.450524] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.329 [2024-07-24 17:24:44.450831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.329 [2024-07-24 17:24:44.450867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:58.329 [2024-07-24 17:24:44.455499] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.329 [2024-07-24 17:24:44.455807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.329 [2024-07-24 17:24:44.455849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:58.329 [2024-07-24 17:24:44.460401] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.329 [2024-07-24 17:24:44.460711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.329 [2024-07-24 17:24:44.460747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:58.329 [2024-07-24 17:24:44.465289] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.329 [2024-07-24 17:24:44.465621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.329 [2024-07-24 17:24:44.465656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:58.329 [2024-07-24 17:24:44.470099] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.329 [2024-07-24 17:24:44.470441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.329 [2024-07-24 17:24:44.470476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:58.329 [2024-07-24 17:24:44.475020] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.329 [2024-07-24 17:24:44.475389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.329 [2024-07-24 17:24:44.475439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:58.329 [2024-07-24 17:24:44.480016] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.329 [2024-07-24 17:24:44.480333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.329 [2024-07-24 17:24:44.480369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:58.329 [2024-07-24 17:24:44.485235] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.329 [2024-07-24 17:24:44.485611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.329 [2024-07-24 17:24:44.485648] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:58.329 [2024-07-24 17:24:44.490751] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.329 [2024-07-24 17:24:44.491128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.329 [2024-07-24 17:24:44.491161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:58.329 [2024-07-24 17:24:44.495698] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.329 [2024-07-24 17:24:44.496007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.329 [2024-07-24 17:24:44.496052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:58.329 [2024-07-24 17:24:44.500606] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.329 [2024-07-24 17:24:44.500913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.329 [2024-07-24 17:24:44.500948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:58.329 [2024-07-24 17:24:44.505610] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.329 [2024-07-24 17:24:44.505916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.329 [2024-07-24 17:24:44.505951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:58.329 [2024-07-24 17:24:44.510519] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.329 [2024-07-24 17:24:44.510843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.329 [2024-07-24 17:24:44.510879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:58.329 [2024-07-24 17:24:44.515437] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.329 [2024-07-24 17:24:44.515750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.329 [2024-07-24 17:24:44.515786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:58.329 [2024-07-24 17:24:44.520294] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.329 [2024-07-24 17:24:44.520604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.329 
[2024-07-24 17:24:44.520640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:58.329 [2024-07-24 17:24:44.525160] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.329 [2024-07-24 17:24:44.525455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.329 [2024-07-24 17:24:44.525492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:58.329 [2024-07-24 17:24:44.529983] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.329 [2024-07-24 17:24:44.530301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.329 [2024-07-24 17:24:44.530337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:58.329 [2024-07-24 17:24:44.534785] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.329 [2024-07-24 17:24:44.535147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.329 [2024-07-24 17:24:44.535181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:58.329 [2024-07-24 17:24:44.539783] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.329 [2024-07-24 17:24:44.540102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.329 [2024-07-24 17:24:44.540150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:58.329 [2024-07-24 17:24:44.544739] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.329 [2024-07-24 17:24:44.545093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.329 [2024-07-24 17:24:44.545138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:58.329 [2024-07-24 17:24:44.549938] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.329 [2024-07-24 17:24:44.550260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.329 [2024-07-24 17:24:44.550294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:58.329 [2024-07-24 17:24:44.555048] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.330 [2024-07-24 17:24:44.555391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.330 [2024-07-24 17:24:44.555427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:58.330 [2024-07-24 17:24:44.560483] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.330 [2024-07-24 17:24:44.560772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.330 [2024-07-24 17:24:44.560809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:58.589 [2024-07-24 17:24:44.565914] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.589 [2024-07-24 17:24:44.566247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.589 [2024-07-24 17:24:44.566301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:58.589 [2024-07-24 17:24:44.571292] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.589 [2024-07-24 17:24:44.571616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.589 [2024-07-24 17:24:44.571652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:58.589 [2024-07-24 17:24:44.576435] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.589 [2024-07-24 17:24:44.576794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.589 [2024-07-24 17:24:44.576830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:58.589 [2024-07-24 17:24:44.581616] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.589 [2024-07-24 17:24:44.581927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.589 [2024-07-24 17:24:44.581963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:58.589 [2024-07-24 17:24:44.586744] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.589 [2024-07-24 17:24:44.587041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.589 [2024-07-24 17:24:44.587116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:58.589 [2024-07-24 17:24:44.591931] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.589 [2024-07-24 17:24:44.592291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.589 [2024-07-24 17:24:44.592327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:58.589 [2024-07-24 17:24:44.597581] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.590 [2024-07-24 17:24:44.597928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.590 [2024-07-24 17:24:44.597979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:58.590 [2024-07-24 17:24:44.602627] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.590 [2024-07-24 17:24:44.602936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.590 [2024-07-24 17:24:44.602974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:58.590 [2024-07-24 17:24:44.607653] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.590 [2024-07-24 17:24:44.607964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.590 [2024-07-24 17:24:44.608000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:58.590 [2024-07-24 17:24:44.612528] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.590 [2024-07-24 17:24:44.612838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.590 [2024-07-24 17:24:44.612874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:58.590 [2024-07-24 17:24:44.617518] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.590 [2024-07-24 17:24:44.617814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.590 [2024-07-24 17:24:44.617850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:58.590 [2024-07-24 17:24:44.622330] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.590 [2024-07-24 17:24:44.622640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.590 [2024-07-24 17:24:44.622675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:58.590 [2024-07-24 17:24:44.627261] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.590 [2024-07-24 17:24:44.627601] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.590 [2024-07-24 17:24:44.627638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:58.590 [2024-07-24 17:24:44.632349] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.590 [2024-07-24 17:24:44.632695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.590 [2024-07-24 17:24:44.632730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:58.590 [2024-07-24 17:24:44.638064] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.590 [2024-07-24 17:24:44.638414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.590 [2024-07-24 17:24:44.638449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:58.590 [2024-07-24 17:24:44.643622] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.590 [2024-07-24 17:24:44.643976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.590 [2024-07-24 17:24:44.644015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:58.590 [2024-07-24 17:24:44.649253] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.590 [2024-07-24 17:24:44.649603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.590 [2024-07-24 17:24:44.649640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:58.590 [2024-07-24 17:24:44.654593] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.590 [2024-07-24 17:24:44.654935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.590 [2024-07-24 17:24:44.654973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:58.590 [2024-07-24 17:24:44.659986] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.590 [2024-07-24 17:24:44.660319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.590 [2024-07-24 17:24:44.660355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:58.590 [2024-07-24 17:24:44.665343] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.590 
[2024-07-24 17:24:44.665705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.590 [2024-07-24 17:24:44.665742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:58.590 [2024-07-24 17:24:44.670715] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.590 [2024-07-24 17:24:44.671064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.590 [2024-07-24 17:24:44.671110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:58.590 [2024-07-24 17:24:44.676080] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.590 [2024-07-24 17:24:44.676427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.590 [2024-07-24 17:24:44.676464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:58.590 [2024-07-24 17:24:44.681329] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.590 [2024-07-24 17:24:44.681704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.590 [2024-07-24 17:24:44.681739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:58.590 [2024-07-24 17:24:44.686822] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.590 [2024-07-24 17:24:44.687200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.590 [2024-07-24 17:24:44.687225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:58.590 [2024-07-24 17:24:44.692237] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.590 [2024-07-24 17:24:44.692538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.590 [2024-07-24 17:24:44.692563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:58.590 [2024-07-24 17:24:44.697713] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.590 [2024-07-24 17:24:44.698061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.590 [2024-07-24 17:24:44.698090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:58.590 [2024-07-24 17:24:44.703124] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.590 [2024-07-24 17:24:44.703432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.590 [2024-07-24 17:24:44.703473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:58.590 [2024-07-24 17:24:44.708344] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.590 [2024-07-24 17:24:44.708669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.590 [2024-07-24 17:24:44.708695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:58.590 [2024-07-24 17:24:44.713704] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.590 [2024-07-24 17:24:44.714017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.590 [2024-07-24 17:24:44.714046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:58.590 [2024-07-24 17:24:44.718917] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.590 [2024-07-24 17:24:44.719254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.590 [2024-07-24 17:24:44.719278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:58.590 [2024-07-24 17:24:44.724464] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.590 [2024-07-24 17:24:44.724744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.590 [2024-07-24 17:24:44.724770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:58.590 [2024-07-24 17:24:44.729634] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.590 [2024-07-24 17:24:44.729909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.590 [2024-07-24 17:24:44.729934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:58.590 [2024-07-24 17:24:44.734863] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.590 [2024-07-24 17:24:44.735206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.590 [2024-07-24 17:24:44.735234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:58.591 [2024-07-24 17:24:44.740053] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.591 [2024-07-24 17:24:44.740489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.591 [2024-07-24 17:24:44.740529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:58.591 [2024-07-24 17:24:44.745471] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.591 [2024-07-24 17:24:44.745786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.591 [2024-07-24 17:24:44.745812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:58.591 [2024-07-24 17:24:44.750647] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.591 [2024-07-24 17:24:44.750920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.591 [2024-07-24 17:24:44.750947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:58.591 [2024-07-24 17:24:44.755612] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.591 [2024-07-24 17:24:44.755885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.591 [2024-07-24 17:24:44.755910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:58.591 [2024-07-24 17:24:44.760519] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.591 [2024-07-24 17:24:44.760792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.591 [2024-07-24 17:24:44.760818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:58.591 [2024-07-24 17:24:44.765396] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.591 [2024-07-24 17:24:44.765675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.591 [2024-07-24 17:24:44.765702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:58.591 [2024-07-24 17:24:44.770226] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.591 [2024-07-24 17:24:44.770498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.591 [2024-07-24 17:24:44.770524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:23:58.591 [2024-07-24 17:24:44.775000] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.591 [2024-07-24 17:24:44.775362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.591 [2024-07-24 17:24:44.775425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:58.591 [2024-07-24 17:24:44.780010] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.591 [2024-07-24 17:24:44.780340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.591 [2024-07-24 17:24:44.780385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:58.591 [2024-07-24 17:24:44.785007] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.591 [2024-07-24 17:24:44.785327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.591 [2024-07-24 17:24:44.785358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:58.591 [2024-07-24 17:24:44.789987] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.591 [2024-07-24 17:24:44.790316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.591 [2024-07-24 17:24:44.790346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:58.591 [2024-07-24 17:24:44.794977] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.591 [2024-07-24 17:24:44.795329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.591 [2024-07-24 17:24:44.795360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:58.591 [2024-07-24 17:24:44.800042] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.591 [2024-07-24 17:24:44.800382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.591 [2024-07-24 17:24:44.800413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:58.591 [2024-07-24 17:24:44.805045] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.591 [2024-07-24 17:24:44.805405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.591 [2024-07-24 17:24:44.805437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:58.591 [2024-07-24 17:24:44.810066] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.591 [2024-07-24 17:24:44.810404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.591 [2024-07-24 17:24:44.810437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:58.591 [2024-07-24 17:24:44.815005] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.591 [2024-07-24 17:24:44.815391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.591 [2024-07-24 17:24:44.815454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:58.591 [2024-07-24 17:24:44.820388] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.591 [2024-07-24 17:24:44.820714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.591 [2024-07-24 17:24:44.820755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:58.851 [2024-07-24 17:24:44.825735] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.851 [2024-07-24 17:24:44.826010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.851 [2024-07-24 17:24:44.826036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:58.851 [2024-07-24 17:24:44.830847] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.851 [2024-07-24 17:24:44.831166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.851 [2024-07-24 17:24:44.831200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:58.851 [2024-07-24 17:24:44.835883] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.851 [2024-07-24 17:24:44.836196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.851 [2024-07-24 17:24:44.836221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:58.851 [2024-07-24 17:24:44.840847] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.851 [2024-07-24 17:24:44.841138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.851 [2024-07-24 17:24:44.841175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:58.851 [2024-07-24 17:24:44.845880] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.851 [2024-07-24 17:24:44.846202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.851 [2024-07-24 17:24:44.846229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:58.851 [2024-07-24 17:24:44.851040] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.851 [2024-07-24 17:24:44.851398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.851 [2024-07-24 17:24:44.851424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:58.851 [2024-07-24 17:24:44.856571] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.851 [2024-07-24 17:24:44.856906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.851 [2024-07-24 17:24:44.856935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:58.851 [2024-07-24 17:24:44.861755] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.851 [2024-07-24 17:24:44.862053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.851 [2024-07-24 17:24:44.862080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:58.851 [2024-07-24 17:24:44.866653] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.851 [2024-07-24 17:24:44.866925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.851 [2024-07-24 17:24:44.866950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:58.851 [2024-07-24 17:24:44.871676] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.851 [2024-07-24 17:24:44.871956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.851 [2024-07-24 17:24:44.871982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:58.851 [2024-07-24 17:24:44.876669] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.851 [2024-07-24 17:24:44.876956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.851 [2024-07-24 17:24:44.876982] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:58.852 [2024-07-24 17:24:44.881710] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.852 [2024-07-24 17:24:44.882004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.852 [2024-07-24 17:24:44.882032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:58.852 [2024-07-24 17:24:44.886633] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.852 [2024-07-24 17:24:44.886903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.852 [2024-07-24 17:24:44.886931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:58.852 [2024-07-24 17:24:44.892001] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.852 [2024-07-24 17:24:44.892360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.852 [2024-07-24 17:24:44.892384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:58.852 [2024-07-24 17:24:44.897549] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.852 [2024-07-24 17:24:44.897822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.852 [2024-07-24 17:24:44.897849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:58.852 [2024-07-24 17:24:44.902551] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.852 [2024-07-24 17:24:44.902829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.852 [2024-07-24 17:24:44.902856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:58.852 [2024-07-24 17:24:44.907542] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.852 [2024-07-24 17:24:44.907814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.852 [2024-07-24 17:24:44.907839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:58.852 [2024-07-24 17:24:44.912696] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.852 [2024-07-24 17:24:44.912983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.852 
[2024-07-24 17:24:44.913010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:58.852 [2024-07-24 17:24:44.917638] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.852 [2024-07-24 17:24:44.917912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.852 [2024-07-24 17:24:44.917937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:58.852 [2024-07-24 17:24:44.922531] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.852 [2024-07-24 17:24:44.922800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.852 [2024-07-24 17:24:44.922826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:58.852 [2024-07-24 17:24:44.927440] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.852 [2024-07-24 17:24:44.927713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.852 [2024-07-24 17:24:44.927739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:58.852 [2024-07-24 17:24:44.932400] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.852 [2024-07-24 17:24:44.932671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.852 [2024-07-24 17:24:44.932697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:58.852 [2024-07-24 17:24:44.937203] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.852 [2024-07-24 17:24:44.937475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.852 [2024-07-24 17:24:44.937501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:58.852 [2024-07-24 17:24:44.942104] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.852 [2024-07-24 17:24:44.942408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.852 [2024-07-24 17:24:44.942434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:58.852 [2024-07-24 17:24:44.946928] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.852 [2024-07-24 17:24:44.947255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.852 [2024-07-24 17:24:44.947288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:58.852 [2024-07-24 17:24:44.951951] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.852 [2024-07-24 17:24:44.952237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.852 [2024-07-24 17:24:44.952262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:58.852 [2024-07-24 17:24:44.956798] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.852 [2024-07-24 17:24:44.957070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.852 [2024-07-24 17:24:44.957107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:58.852 [2024-07-24 17:24:44.961825] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.852 [2024-07-24 17:24:44.962123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.852 [2024-07-24 17:24:44.962161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:58.852 [2024-07-24 17:24:44.966648] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.852 [2024-07-24 17:24:44.966920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.852 [2024-07-24 17:24:44.966946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:58.852 [2024-07-24 17:24:44.971619] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.852 [2024-07-24 17:24:44.971891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.852 [2024-07-24 17:24:44.971918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:58.852 [2024-07-24 17:24:44.976606] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.852 [2024-07-24 17:24:44.976884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.852 [2024-07-24 17:24:44.976910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:58.852 [2024-07-24 17:24:44.981601] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.852 [2024-07-24 17:24:44.981874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.852 [2024-07-24 17:24:44.981900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:58.852 [2024-07-24 17:24:44.986469] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.852 [2024-07-24 17:24:44.986743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.852 [2024-07-24 17:24:44.986770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:58.852 [2024-07-24 17:24:44.991514] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.852 [2024-07-24 17:24:44.991823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.852 [2024-07-24 17:24:44.991850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:58.852 [2024-07-24 17:24:44.997030] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.852 [2024-07-24 17:24:44.997368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.852 [2024-07-24 17:24:44.997413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:58.852 [2024-07-24 17:24:45.002714] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.852 [2024-07-24 17:24:45.003073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.852 [2024-07-24 17:24:45.003110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:58.852 [2024-07-24 17:24:45.008451] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.852 [2024-07-24 17:24:45.008717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.852 [2024-07-24 17:24:45.008742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:58.852 [2024-07-24 17:24:45.013802] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.852 [2024-07-24 17:24:45.014131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.853 [2024-07-24 17:24:45.014177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:58.853 [2024-07-24 17:24:45.019232] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.853 [2024-07-24 17:24:45.019543] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.853 [2024-07-24 17:24:45.019569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:58.853 [2024-07-24 17:24:45.024524] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.853 [2024-07-24 17:24:45.024806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.853 [2024-07-24 17:24:45.024832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:58.853 [2024-07-24 17:24:45.029627] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.853 [2024-07-24 17:24:45.029908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.853 [2024-07-24 17:24:45.029934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:58.853 [2024-07-24 17:24:45.034864] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.853 [2024-07-24 17:24:45.035233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.853 [2024-07-24 17:24:45.035261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:58.853 [2024-07-24 17:24:45.040321] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.853 [2024-07-24 17:24:45.040630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.853 [2024-07-24 17:24:45.040656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:58.853 [2024-07-24 17:24:45.045762] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.853 [2024-07-24 17:24:45.046093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.853 [2024-07-24 17:24:45.046131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:58.853 [2024-07-24 17:24:45.051303] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.853 [2024-07-24 17:24:45.051605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.853 [2024-07-24 17:24:45.051632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:58.853 [2024-07-24 17:24:45.056611] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.853 
[2024-07-24 17:24:45.056888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.853 [2024-07-24 17:24:45.056915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:58.853 [2024-07-24 17:24:45.061830] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.853 [2024-07-24 17:24:45.062129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.853 [2024-07-24 17:24:45.062167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:58.853 [2024-07-24 17:24:45.066925] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.853 [2024-07-24 17:24:45.067295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.853 [2024-07-24 17:24:45.067323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:58.853 [2024-07-24 17:24:45.072138] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.853 [2024-07-24 17:24:45.072465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.853 [2024-07-24 17:24:45.072491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:58.853 [2024-07-24 17:24:45.077265] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.853 [2024-07-24 17:24:45.077546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.853 [2024-07-24 17:24:45.077571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:58.853 [2024-07-24 17:24:45.082527] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:58.853 [2024-07-24 17:24:45.082854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:58.853 [2024-07-24 17:24:45.082898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.113 [2024-07-24 17:24:45.087819] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.113 [2024-07-24 17:24:45.088131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.113 [2024-07-24 17:24:45.088169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.113 [2024-07-24 17:24:45.093178] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.113 [2024-07-24 17:24:45.093459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.113 [2024-07-24 17:24:45.093485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.113 [2024-07-24 17:24:45.098158] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.113 [2024-07-24 17:24:45.098454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.113 [2024-07-24 17:24:45.098479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.113 [2024-07-24 17:24:45.103262] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.113 [2024-07-24 17:24:45.103584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.113 [2024-07-24 17:24:45.103610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.113 [2024-07-24 17:24:45.108225] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.113 [2024-07-24 17:24:45.108547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.113 [2024-07-24 17:24:45.108574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.113 [2024-07-24 17:24:45.113760] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.113 [2024-07-24 17:24:45.114093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.113 [2024-07-24 17:24:45.114130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.113 [2024-07-24 17:24:45.118735] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.113 [2024-07-24 17:24:45.119016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.113 [2024-07-24 17:24:45.119042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.113 [2024-07-24 17:24:45.123786] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.113 [2024-07-24 17:24:45.124066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.113 [2024-07-24 17:24:45.124092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.113 [2024-07-24 17:24:45.128832] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.113 [2024-07-24 17:24:45.129131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.113 [2024-07-24 17:24:45.129171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.113 [2024-07-24 17:24:45.134414] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.113 [2024-07-24 17:24:45.134694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.113 [2024-07-24 17:24:45.134720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.113 [2024-07-24 17:24:45.139599] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.113 [2024-07-24 17:24:45.139887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.113 [2024-07-24 17:24:45.139915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.113 [2024-07-24 17:24:45.144700] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.113 [2024-07-24 17:24:45.145016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.113 [2024-07-24 17:24:45.145038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.113 [2024-07-24 17:24:45.150128] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.113 [2024-07-24 17:24:45.150480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.113 [2024-07-24 17:24:45.150524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.113 [2024-07-24 17:24:45.155727] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.113 [2024-07-24 17:24:45.156010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.113 [2024-07-24 17:24:45.156037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.113 [2024-07-24 17:24:45.160808] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.113 [2024-07-24 17:24:45.161120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.113 [2024-07-24 17:24:45.161156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:23:59.113 [2024-07-24 17:24:45.165863] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.113 [2024-07-24 17:24:45.166156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.113 [2024-07-24 17:24:45.166182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.113 [2024-07-24 17:24:45.170861] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.113 [2024-07-24 17:24:45.171200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.113 [2024-07-24 17:24:45.171227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.113 [2024-07-24 17:24:45.175916] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.113 [2024-07-24 17:24:45.176221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.113 [2024-07-24 17:24:45.176247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.113 [2024-07-24 17:24:45.181303] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.113 [2024-07-24 17:24:45.181598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.113 [2024-07-24 17:24:45.181624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.113 [2024-07-24 17:24:45.186328] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.113 [2024-07-24 17:24:45.186608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.113 [2024-07-24 17:24:45.186634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.113 [2024-07-24 17:24:45.191338] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.113 [2024-07-24 17:24:45.191656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.113 [2024-07-24 17:24:45.191682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.113 [2024-07-24 17:24:45.196392] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.113 [2024-07-24 17:24:45.196673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.113 [2024-07-24 17:24:45.196701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.113 [2024-07-24 17:24:45.201362] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.113 [2024-07-24 17:24:45.201656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.113 [2024-07-24 17:24:45.201683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.113 [2024-07-24 17:24:45.206360] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.113 [2024-07-24 17:24:45.206642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.113 [2024-07-24 17:24:45.206669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.113 [2024-07-24 17:24:45.211476] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.113 [2024-07-24 17:24:45.211764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.113 [2024-07-24 17:24:45.211791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.113 [2024-07-24 17:24:45.216651] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.113 [2024-07-24 17:24:45.216936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.114 [2024-07-24 17:24:45.216962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.114 [2024-07-24 17:24:45.221831] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.114 [2024-07-24 17:24:45.222102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.114 [2024-07-24 17:24:45.222137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.114 [2024-07-24 17:24:45.226672] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.114 [2024-07-24 17:24:45.226943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.114 [2024-07-24 17:24:45.226969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.114 [2024-07-24 17:24:45.231739] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.114 [2024-07-24 17:24:45.232036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.114 [2024-07-24 17:24:45.232063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.114 [2024-07-24 17:24:45.237066] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.114 [2024-07-24 17:24:45.237406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.114 [2024-07-24 17:24:45.237438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.114 [2024-07-24 17:24:45.242331] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.114 [2024-07-24 17:24:45.242686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.114 [2024-07-24 17:24:45.242713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.114 [2024-07-24 17:24:45.247612] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.114 [2024-07-24 17:24:45.247896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.114 [2024-07-24 17:24:45.247924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.114 [2024-07-24 17:24:45.253036] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.114 [2024-07-24 17:24:45.253352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.114 [2024-07-24 17:24:45.253410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.114 [2024-07-24 17:24:45.258485] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.114 [2024-07-24 17:24:45.258844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.114 [2024-07-24 17:24:45.258873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.114 [2024-07-24 17:24:45.264116] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.114 [2024-07-24 17:24:45.264479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.114 [2024-07-24 17:24:45.264506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.114 [2024-07-24 17:24:45.269474] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.114 [2024-07-24 17:24:45.269747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.114 [2024-07-24 17:24:45.269773] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.114 [2024-07-24 17:24:45.274776] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.114 [2024-07-24 17:24:45.275141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.114 [2024-07-24 17:24:45.275169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.114 [2024-07-24 17:24:45.280080] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.114 [2024-07-24 17:24:45.280457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.114 [2024-07-24 17:24:45.280520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.114 [2024-07-24 17:24:45.285404] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.114 [2024-07-24 17:24:45.285684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.114 [2024-07-24 17:24:45.285711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.114 [2024-07-24 17:24:45.290296] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.114 [2024-07-24 17:24:45.290568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.114 [2024-07-24 17:24:45.290594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.114 [2024-07-24 17:24:45.295051] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.114 [2024-07-24 17:24:45.295393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.114 [2024-07-24 17:24:45.295440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.114 [2024-07-24 17:24:45.300031] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.114 [2024-07-24 17:24:45.300359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.114 [2024-07-24 17:24:45.300405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.114 [2024-07-24 17:24:45.305237] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.114 [2024-07-24 17:24:45.305509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.114 
[2024-07-24 17:24:45.305534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.114 [2024-07-24 17:24:45.310005] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.114 [2024-07-24 17:24:45.310308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.114 [2024-07-24 17:24:45.310338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.114 [2024-07-24 17:24:45.314853] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.114 [2024-07-24 17:24:45.315179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.114 [2024-07-24 17:24:45.315206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.114 [2024-07-24 17:24:45.320151] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.114 [2024-07-24 17:24:45.320492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.114 [2024-07-24 17:24:45.320531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.114 [2024-07-24 17:24:45.325351] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.114 [2024-07-24 17:24:45.325649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.114 [2024-07-24 17:24:45.325675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.114 [2024-07-24 17:24:45.330491] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.114 [2024-07-24 17:24:45.330771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.114 [2024-07-24 17:24:45.330798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.114 [2024-07-24 17:24:45.335928] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.114 [2024-07-24 17:24:45.336268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.114 [2024-07-24 17:24:45.336295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.114 [2024-07-24 17:24:45.341369] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.114 [2024-07-24 17:24:45.341694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.114 [2024-07-24 17:24:45.341736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.374 [2024-07-24 17:24:45.347006] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.374 [2024-07-24 17:24:45.347359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.375 [2024-07-24 17:24:45.347387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.375 [2024-07-24 17:24:45.352483] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.375 [2024-07-24 17:24:45.352763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.375 [2024-07-24 17:24:45.352789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.375 [2024-07-24 17:24:45.357539] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.375 [2024-07-24 17:24:45.357819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.375 [2024-07-24 17:24:45.357846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.375 [2024-07-24 17:24:45.362539] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.375 [2024-07-24 17:24:45.362818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.375 [2024-07-24 17:24:45.362844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.375 [2024-07-24 17:24:45.367705] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.375 [2024-07-24 17:24:45.367984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.375 [2024-07-24 17:24:45.368010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.375 [2024-07-24 17:24:45.372669] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.375 [2024-07-24 17:24:45.372961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.375 [2024-07-24 17:24:45.373004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.375 [2024-07-24 17:24:45.377671] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.375 [2024-07-24 17:24:45.377952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.375 [2024-07-24 17:24:45.377978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.375 [2024-07-24 17:24:45.382626] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.375 [2024-07-24 17:24:45.382904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.375 [2024-07-24 17:24:45.382931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.375 [2024-07-24 17:24:45.387852] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.375 [2024-07-24 17:24:45.388144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.375 [2024-07-24 17:24:45.388177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.375 [2024-07-24 17:24:45.392842] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.375 [2024-07-24 17:24:45.393122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.375 [2024-07-24 17:24:45.393159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.375 [2024-07-24 17:24:45.397781] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.375 [2024-07-24 17:24:45.398079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.375 [2024-07-24 17:24:45.398116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.375 [2024-07-24 17:24:45.402770] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.375 [2024-07-24 17:24:45.403049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.375 [2024-07-24 17:24:45.403129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.375 [2024-07-24 17:24:45.408095] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.375 [2024-07-24 17:24:45.408454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.375 [2024-07-24 17:24:45.408486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.375 [2024-07-24 17:24:45.413504] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.375 [2024-07-24 17:24:45.413799] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.375 [2024-07-24 17:24:45.413825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.375 [2024-07-24 17:24:45.418799] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.375 [2024-07-24 17:24:45.419199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.375 [2024-07-24 17:24:45.419226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.375 [2024-07-24 17:24:45.424304] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.375 [2024-07-24 17:24:45.424622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.375 [2024-07-24 17:24:45.424654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.375 [2024-07-24 17:24:45.429407] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.375 [2024-07-24 17:24:45.429732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.375 [2024-07-24 17:24:45.429764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.375 [2024-07-24 17:24:45.434489] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.375 [2024-07-24 17:24:45.434815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.375 [2024-07-24 17:24:45.434846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.375 [2024-07-24 17:24:45.439568] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.375 [2024-07-24 17:24:45.439885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.375 [2024-07-24 17:24:45.439917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.375 [2024-07-24 17:24:45.444641] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.375 [2024-07-24 17:24:45.444959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.375 [2024-07-24 17:24:45.444992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.375 [2024-07-24 17:24:45.449582] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.375 
[2024-07-24 17:24:45.449901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.375 [2024-07-24 17:24:45.449933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.375 [2024-07-24 17:24:45.454666] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.375 [2024-07-24 17:24:45.454982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.375 [2024-07-24 17:24:45.455013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.375 [2024-07-24 17:24:45.459794] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.375 [2024-07-24 17:24:45.460126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.375 [2024-07-24 17:24:45.460168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.375 [2024-07-24 17:24:45.464945] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.375 [2024-07-24 17:24:45.465276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.375 [2024-07-24 17:24:45.465307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.375 [2024-07-24 17:24:45.470106] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.375 [2024-07-24 17:24:45.470480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.375 [2024-07-24 17:24:45.470512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.375 [2024-07-24 17:24:45.475301] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.375 [2024-07-24 17:24:45.475637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.375 [2024-07-24 17:24:45.475668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.375 [2024-07-24 17:24:45.480728] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.375 [2024-07-24 17:24:45.481039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.375 [2024-07-24 17:24:45.481070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.375 [2024-07-24 17:24:45.485789] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.376 [2024-07-24 17:24:45.486134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.376 [2024-07-24 17:24:45.486175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.376 [2024-07-24 17:24:45.490762] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.376 [2024-07-24 17:24:45.491142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.376 [2024-07-24 17:24:45.491167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.376 [2024-07-24 17:24:45.495859] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.376 [2024-07-24 17:24:45.496168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.376 [2024-07-24 17:24:45.496190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.376 [2024-07-24 17:24:45.500854] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.376 [2024-07-24 17:24:45.501175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.376 [2024-07-24 17:24:45.501223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.376 [2024-07-24 17:24:45.505891] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.376 [2024-07-24 17:24:45.506235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.376 [2024-07-24 17:24:45.506267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.376 [2024-07-24 17:24:45.511032] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.376 [2024-07-24 17:24:45.511363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.376 [2024-07-24 17:24:45.511395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.376 [2024-07-24 17:24:45.516032] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.376 [2024-07-24 17:24:45.516397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.376 [2024-07-24 17:24:45.516429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.376 [2024-07-24 17:24:45.521399] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.376 [2024-07-24 17:24:45.521724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.376 [2024-07-24 17:24:45.521754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.376 [2024-07-24 17:24:45.526603] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.376 [2024-07-24 17:24:45.526918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.376 [2024-07-24 17:24:45.526966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.376 [2024-07-24 17:24:45.531806] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.376 [2024-07-24 17:24:45.532147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.376 [2024-07-24 17:24:45.532190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.376 [2024-07-24 17:24:45.536826] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.376 [2024-07-24 17:24:45.537144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.376 [2024-07-24 17:24:45.537168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.376 [2024-07-24 17:24:45.541850] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.376 [2024-07-24 17:24:45.542205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.376 [2024-07-24 17:24:45.542236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.376 [2024-07-24 17:24:45.546843] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.376 [2024-07-24 17:24:45.547192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.376 [2024-07-24 17:24:45.547224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.376 [2024-07-24 17:24:45.551796] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.376 [2024-07-24 17:24:45.552109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.376 [2024-07-24 17:24:45.552148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
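The block above is the output of an error-injection pass: every WRITE is sent with a deliberately bad data digest, so tcp.c reports "Data digest error" from data_crc32_calc_done for each one. In NVMe/TCP the data digest (DDGST) is a CRC32C computed over the DATA field of the PDU and carried at its end; the receiver recomputes the CRC and compares it with the DDGST it received. The sketch below illustrates that check with a plain bitwise CRC32C; it is not SPDK's implementation (which uses its own accelerated CRC32C helpers), and the payload and digest values are invented for the example.

/* crc32c_check.c - illustrative NVMe/TCP data digest check (not SPDK code). */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Bitwise CRC32C (Castagnoli, reflected polynomial 0x82F63B78). Real
 * transports use table-driven or hardware-accelerated versions. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
	uint32_t crc = 0xFFFFFFFFu;

	for (size_t i = 0; i < len; i++) {
		crc ^= buf[i];
		for (int bit = 0; bit < 8; bit++) {
			crc = (crc >> 1) ^ ((crc & 1u) ? 0x82F63B78u : 0u);
		}
	}
	return ~crc;
}

int main(void)
{
	/* Hypothetical PDU DATA payload and the DDGST carried in the PDU;
	 * the received digest is wrong on purpose, as in the test above. */
	const uint8_t data[] = "h2c data payload";
	uint32_t received_ddgst = 0xDEADBEEFu;
	uint32_t computed = crc32c(data, sizeof(data) - 1);

	if (computed != received_ddgst) {
		/* This mismatch is what the log reports as a data digest error. */
		printf("Data digest error: computed 0x%08x, received 0x%08x\n",
		       computed, received_ddgst);
	}
	return 0;
}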
00:23:59.376 [2024-07-24 17:24:45.556728] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.376 [2024-07-24 17:24:45.557030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.376 [2024-07-24 17:24:45.557060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.376 [2024-07-24 17:24:45.561717] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.376 [2024-07-24 17:24:45.562050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.376 [2024-07-24 17:24:45.562080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.376 [2024-07-24 17:24:45.566894] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.376 [2024-07-24 17:24:45.567262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.376 [2024-07-24 17:24:45.567295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.376 [2024-07-24 17:24:45.572172] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.376 [2024-07-24 17:24:45.572485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.376 [2024-07-24 17:24:45.572517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.376 [2024-07-24 17:24:45.577580] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.376 [2024-07-24 17:24:45.577870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.376 [2024-07-24 17:24:45.577896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.376 [2024-07-24 17:24:45.583012] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.376 [2024-07-24 17:24:45.583366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.376 [2024-07-24 17:24:45.583401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.376 [2024-07-24 17:24:45.588609] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.376 [2024-07-24 17:24:45.588912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.376 [2024-07-24 17:24:45.588942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.376 [2024-07-24 17:24:45.593852] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.376 [2024-07-24 17:24:45.594219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.376 [2024-07-24 17:24:45.594250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.376 [2024-07-24 17:24:45.598974] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.376 [2024-07-24 17:24:45.599329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.376 [2024-07-24 17:24:45.599361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.376 [2024-07-24 17:24:45.604364] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.376 [2024-07-24 17:24:45.604704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.376 [2024-07-24 17:24:45.604734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.636 [2024-07-24 17:24:45.609676] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.636 [2024-07-24 17:24:45.609993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.636 [2024-07-24 17:24:45.610024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.636 [2024-07-24 17:24:45.614842] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.636 [2024-07-24 17:24:45.615176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.636 [2024-07-24 17:24:45.615209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.636 [2024-07-24 17:24:45.619799] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.636 [2024-07-24 17:24:45.620113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.636 [2024-07-24 17:24:45.620159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.636 [2024-07-24 17:24:45.624619] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.636 [2024-07-24 17:24:45.624919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.636 [2024-07-24 17:24:45.624950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.636 [2024-07-24 17:24:45.629555] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.636 [2024-07-24 17:24:45.629856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.636 [2024-07-24 17:24:45.629887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.637 [2024-07-24 17:24:45.634384] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.637 [2024-07-24 17:24:45.634687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.637 [2024-07-24 17:24:45.634717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.637 [2024-07-24 17:24:45.639383] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.637 [2024-07-24 17:24:45.639734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.637 [2024-07-24 17:24:45.639763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.637 [2024-07-24 17:24:45.644257] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.637 [2024-07-24 17:24:45.644578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.637 [2024-07-24 17:24:45.644609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.637 [2024-07-24 17:24:45.649116] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.637 [2024-07-24 17:24:45.649440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.637 [2024-07-24 17:24:45.649479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.637 [2024-07-24 17:24:45.653933] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.637 [2024-07-24 17:24:45.654270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.637 [2024-07-24 17:24:45.654302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.637 [2024-07-24 17:24:45.658763] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.637 [2024-07-24 17:24:45.659107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.637 [2024-07-24 17:24:45.659143] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.637 [2024-07-24 17:24:45.663792] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.637 [2024-07-24 17:24:45.664093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.637 [2024-07-24 17:24:45.664129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.637 [2024-07-24 17:24:45.668776] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.637 [2024-07-24 17:24:45.669082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.637 [2024-07-24 17:24:45.669117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.637 [2024-07-24 17:24:45.673659] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.637 [2024-07-24 17:24:45.673962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.637 [2024-07-24 17:24:45.674009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.637 [2024-07-24 17:24:45.679116] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.637 [2024-07-24 17:24:45.679416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.637 [2024-07-24 17:24:45.679448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.637 [2024-07-24 17:24:45.684307] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.637 [2024-07-24 17:24:45.684609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.637 [2024-07-24 17:24:45.684639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.637 [2024-07-24 17:24:45.689242] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.637 [2024-07-24 17:24:45.689601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.637 [2024-07-24 17:24:45.689631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.637 [2024-07-24 17:24:45.694082] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.637 [2024-07-24 17:24:45.694394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.637 
[2024-07-24 17:24:45.694424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.637 [2024-07-24 17:24:45.698835] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.637 [2024-07-24 17:24:45.699184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.637 [2024-07-24 17:24:45.699216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.637 [2024-07-24 17:24:45.703844] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.637 [2024-07-24 17:24:45.704146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.637 [2024-07-24 17:24:45.704187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.637 [2024-07-24 17:24:45.708686] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.637 [2024-07-24 17:24:45.708991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.637 [2024-07-24 17:24:45.709021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.637 [2024-07-24 17:24:45.713526] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.637 [2024-07-24 17:24:45.713829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.637 [2024-07-24 17:24:45.713859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.637 [2024-07-24 17:24:45.718659] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.637 [2024-07-24 17:24:45.719025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.637 [2024-07-24 17:24:45.719119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.637 [2024-07-24 17:24:45.723978] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.637 [2024-07-24 17:24:45.724292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.637 [2024-07-24 17:24:45.724322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.637 [2024-07-24 17:24:45.728801] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.637 [2024-07-24 17:24:45.729105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.637 [2024-07-24 17:24:45.729146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.637 [2024-07-24 17:24:45.733597] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.637 [2024-07-24 17:24:45.733900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.637 [2024-07-24 17:24:45.733930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.637 [2024-07-24 17:24:45.738354] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.637 [2024-07-24 17:24:45.738656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.637 [2024-07-24 17:24:45.738686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.637 [2024-07-24 17:24:45.743227] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.637 [2024-07-24 17:24:45.743591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.637 [2024-07-24 17:24:45.743621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.637 [2024-07-24 17:24:45.748104] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.637 [2024-07-24 17:24:45.748429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.637 [2024-07-24 17:24:45.748458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.637 [2024-07-24 17:24:45.752896] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.637 [2024-07-24 17:24:45.753211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.638 [2024-07-24 17:24:45.753259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.638 [2024-07-24 17:24:45.757694] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.638 [2024-07-24 17:24:45.757995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.638 [2024-07-24 17:24:45.758025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.638 [2024-07-24 17:24:45.762576] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.638 [2024-07-24 17:24:45.762878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.638 [2024-07-24 17:24:45.762908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.638 [2024-07-24 17:24:45.767586] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.638 [2024-07-24 17:24:45.767888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.638 [2024-07-24 17:24:45.767918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.638 [2024-07-24 17:24:45.772454] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.638 [2024-07-24 17:24:45.772756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.638 [2024-07-24 17:24:45.772786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.638 [2024-07-24 17:24:45.777275] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.638 [2024-07-24 17:24:45.777578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.638 [2024-07-24 17:24:45.777608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.638 [2024-07-24 17:24:45.782052] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.638 [2024-07-24 17:24:45.782373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.638 [2024-07-24 17:24:45.782403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.638 [2024-07-24 17:24:45.786797] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.638 [2024-07-24 17:24:45.787172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.638 [2024-07-24 17:24:45.787204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.638 [2024-07-24 17:24:45.791789] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.638 [2024-07-24 17:24:45.792105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.638 [2024-07-24 17:24:45.792148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.638 [2024-07-24 17:24:45.796698] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.638 [2024-07-24 17:24:45.797001] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.638 [2024-07-24 17:24:45.797031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.638 [2024-07-24 17:24:45.801505] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.638 [2024-07-24 17:24:45.801805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.638 [2024-07-24 17:24:45.801837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.638 [2024-07-24 17:24:45.806231] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.638 [2024-07-24 17:24:45.806531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.638 [2024-07-24 17:24:45.806561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.638 [2024-07-24 17:24:45.810999] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.638 [2024-07-24 17:24:45.811372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.638 [2024-07-24 17:24:45.811419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.638 [2024-07-24 17:24:45.815934] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.638 [2024-07-24 17:24:45.816264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.638 [2024-07-24 17:24:45.816293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.638 [2024-07-24 17:24:45.820658] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.638 [2024-07-24 17:24:45.820972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.638 [2024-07-24 17:24:45.821003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.638 [2024-07-24 17:24:45.825789] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.638 [2024-07-24 17:24:45.826155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.638 [2024-07-24 17:24:45.826196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.638 [2024-07-24 17:24:45.831084] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.638 
[2024-07-24 17:24:45.831460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.638 [2024-07-24 17:24:45.831506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.638 [2024-07-24 17:24:45.835936] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.638 [2024-07-24 17:24:45.836251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.638 [2024-07-24 17:24:45.836281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.638 [2024-07-24 17:24:45.840678] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.638 [2024-07-24 17:24:45.840980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.638 [2024-07-24 17:24:45.841011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.638 [2024-07-24 17:24:45.845578] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.638 [2024-07-24 17:24:45.845878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.638 [2024-07-24 17:24:45.845909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.638 [2024-07-24 17:24:45.850311] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.638 [2024-07-24 17:24:45.850612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.638 [2024-07-24 17:24:45.850642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.638 [2024-07-24 17:24:45.855239] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.638 [2024-07-24 17:24:45.855587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.638 [2024-07-24 17:24:45.855617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.638 [2024-07-24 17:24:45.860309] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.638 [2024-07-24 17:24:45.860609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.638 [2024-07-24 17:24:45.860639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.638 [2024-07-24 17:24:45.865281] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.638 [2024-07-24 17:24:45.865660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.638 [2024-07-24 17:24:45.865692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.898 [2024-07-24 17:24:45.870895] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.898 [2024-07-24 17:24:45.871245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.898 [2024-07-24 17:24:45.871278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.898 [2024-07-24 17:24:45.876037] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.898 [2024-07-24 17:24:45.876407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.898 [2024-07-24 17:24:45.876438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.898 [2024-07-24 17:24:45.880852] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.898 [2024-07-24 17:24:45.881168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.898 [2024-07-24 17:24:45.881198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.898 [2024-07-24 17:24:45.885741] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.898 [2024-07-24 17:24:45.886044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.898 [2024-07-24 17:24:45.886074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.898 [2024-07-24 17:24:45.890447] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.898 [2024-07-24 17:24:45.890748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.898 [2024-07-24 17:24:45.890779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.898 [2024-07-24 17:24:45.895301] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.899 [2024-07-24 17:24:45.895641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.899 [2024-07-24 17:24:45.895671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.899 [2024-07-24 17:24:45.900094] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.899 [2024-07-24 17:24:45.900406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.899 [2024-07-24 17:24:45.900426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.899 [2024-07-24 17:24:45.904953] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.899 [2024-07-24 17:24:45.905249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.899 [2024-07-24 17:24:45.905295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.899 [2024-07-24 17:24:45.909902] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.899 [2024-07-24 17:24:45.910216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.899 [2024-07-24 17:24:45.910246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.899 [2024-07-24 17:24:45.914718] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.899 [2024-07-24 17:24:45.915050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.899 [2024-07-24 17:24:45.915112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.899 [2024-07-24 17:24:45.919520] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.899 [2024-07-24 17:24:45.919820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.899 [2024-07-24 17:24:45.919851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.899 [2024-07-24 17:24:45.924279] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.899 [2024-07-24 17:24:45.924568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.899 [2024-07-24 17:24:45.924597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.899 [2024-07-24 17:24:45.929014] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.899 [2024-07-24 17:24:45.929311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.899 [2024-07-24 17:24:45.929341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
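Each ERROR/NOTICE pair in the run above is one injected failure: data_crc32_calc_done flags a CRC32C mismatch on the received data digest, and the offending WRITE is completed back to the host as a transient transport error so it can be retried rather than failed hard. When picking through a captured log like this offline, the two sides of that pair can be tallied with ordinary text tools; the following is a minimal sketch only, with bperf.log standing in for wherever the run's output was saved:

```bash
#!/usr/bin/env bash
# Tally injected data-digest failures and the transient-transport-error
# completions they produce. "bperf.log" is a placeholder for the saved output.
log=${1:-bperf.log}

digest_errs=$(grep -c 'data_crc32_calc_done: \*ERROR\*: Data digest error' "$log")
transient=$(grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' "$log")

printf 'data digest errors   : %s\n' "$digest_errs"
printf 'transient completions: %s\n' "$transient"
```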
00:23:59.899 [2024-07-24 17:24:45.933981] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.899 [2024-07-24 17:24:45.934290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.899 [2024-07-24 17:24:45.934319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.899 [2024-07-24 17:24:45.939227] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.899 [2024-07-24 17:24:45.939529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.899 [2024-07-24 17:24:45.939561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.899 [2024-07-24 17:24:45.944150] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.899 [2024-07-24 17:24:45.944463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.899 [2024-07-24 17:24:45.944492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.899 [2024-07-24 17:24:45.948869] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.899 [2024-07-24 17:24:45.949167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.899 [2024-07-24 17:24:45.949213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.899 [2024-07-24 17:24:45.953627] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.899 [2024-07-24 17:24:45.953912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.899 [2024-07-24 17:24:45.953933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.899 [2024-07-24 17:24:45.958360] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.899 [2024-07-24 17:24:45.958646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.899 [2024-07-24 17:24:45.958676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.899 [2024-07-24 17:24:45.963207] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.899 [2024-07-24 17:24:45.963513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.899 [2024-07-24 17:24:45.963543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.899 [2024-07-24 17:24:45.968056] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.899 [2024-07-24 17:24:45.968399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.899 [2024-07-24 17:24:45.968430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.899 [2024-07-24 17:24:45.972769] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.899 [2024-07-24 17:24:45.973058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.899 [2024-07-24 17:24:45.973087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.899 [2024-07-24 17:24:45.977580] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.899 [2024-07-24 17:24:45.977866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.899 [2024-07-24 17:24:45.977896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.899 [2024-07-24 17:24:45.982324] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.899 [2024-07-24 17:24:45.982611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.899 [2024-07-24 17:24:45.982641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.899 [2024-07-24 17:24:45.987049] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.899 [2024-07-24 17:24:45.987405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.899 [2024-07-24 17:24:45.987435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.899 [2024-07-24 17:24:45.991812] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.899 [2024-07-24 17:24:45.992098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.899 [2024-07-24 17:24:45.992136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.899 [2024-07-24 17:24:45.996610] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.899 [2024-07-24 17:24:45.996895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.899 [2024-07-24 17:24:45.996925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.899 [2024-07-24 17:24:46.001505] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.899 [2024-07-24 17:24:46.001790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.899 [2024-07-24 17:24:46.001810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.899 [2024-07-24 17:24:46.006268] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.899 [2024-07-24 17:24:46.006539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.899 [2024-07-24 17:24:46.006569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.899 [2024-07-24 17:24:46.010996] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.899 [2024-07-24 17:24:46.011372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.899 [2024-07-24 17:24:46.011403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.899 [2024-07-24 17:24:46.015894] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.899 [2024-07-24 17:24:46.016182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.899 [2024-07-24 17:24:46.016215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.899 [2024-07-24 17:24:46.020691] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.899 [2024-07-24 17:24:46.020990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.899 [2024-07-24 17:24:46.021019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.899 [2024-07-24 17:24:46.025545] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.900 [2024-07-24 17:24:46.025832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.900 [2024-07-24 17:24:46.025861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.900 [2024-07-24 17:24:46.030253] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.900 [2024-07-24 17:24:46.030526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.900 [2024-07-24 17:24:46.030555] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.900 [2024-07-24 17:24:46.035010] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.900 [2024-07-24 17:24:46.035383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.900 [2024-07-24 17:24:46.035414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.900 [2024-07-24 17:24:46.039865] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.900 [2024-07-24 17:24:46.040152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.900 [2024-07-24 17:24:46.040211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.900 [2024-07-24 17:24:46.044772] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.900 [2024-07-24 17:24:46.045073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.900 [2024-07-24 17:24:46.045110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.900 [2024-07-24 17:24:46.049583] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.900 [2024-07-24 17:24:46.049870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.900 [2024-07-24 17:24:46.049900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.900 [2024-07-24 17:24:46.054359] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.900 [2024-07-24 17:24:46.054662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.900 [2024-07-24 17:24:46.054691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.900 [2024-07-24 17:24:46.059091] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.900 [2024-07-24 17:24:46.059407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.900 [2024-07-24 17:24:46.059437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.900 [2024-07-24 17:24:46.063865] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.900 [2024-07-24 17:24:46.064168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:59.900 [2024-07-24 17:24:46.064201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.900 [2024-07-24 17:24:46.068776] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.900 [2024-07-24 17:24:46.069091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.900 [2024-07-24 17:24:46.069130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.900 [2024-07-24 17:24:46.073691] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.900 [2024-07-24 17:24:46.073993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.900 [2024-07-24 17:24:46.074023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.900 [2024-07-24 17:24:46.078530] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.900 [2024-07-24 17:24:46.078833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.900 [2024-07-24 17:24:46.078863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.900 [2024-07-24 17:24:46.083553] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.900 [2024-07-24 17:24:46.083922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.900 [2024-07-24 17:24:46.083953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.900 [2024-07-24 17:24:46.088916] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.900 [2024-07-24 17:24:46.089281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.900 [2024-07-24 17:24:46.089327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.900 [2024-07-24 17:24:46.094047] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.900 [2024-07-24 17:24:46.094368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.900 [2024-07-24 17:24:46.094398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.900 [2024-07-24 17:24:46.098897] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.900 [2024-07-24 17:24:46.099258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.900 [2024-07-24 17:24:46.099290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.900 [2024-07-24 17:24:46.103854] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.900 [2024-07-24 17:24:46.104168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.900 [2024-07-24 17:24:46.104203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.900 [2024-07-24 17:24:46.108799] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.900 [2024-07-24 17:24:46.109101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.900 [2024-07-24 17:24:46.109135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:59.900 [2024-07-24 17:24:46.113667] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.900 [2024-07-24 17:24:46.113999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.900 [2024-07-24 17:24:46.114029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:59.900 [2024-07-24 17:24:46.118456] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.900 [2024-07-24 17:24:46.118757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.900 [2024-07-24 17:24:46.118787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:59.900 [2024-07-24 17:24:46.123260] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.900 [2024-07-24 17:24:46.123608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.900 [2024-07-24 17:24:46.123638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:59.900 [2024-07-24 17:24:46.128223] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:23:59.900 [2024-07-24 17:24:46.128593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:59.900 [2024-07-24 17:24:46.128638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:00.161 [2024-07-24 17:24:46.133552] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:24:00.161 [2024-07-24 17:24:46.133852] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.161 [2024-07-24 17:24:46.133883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:00.161 [2024-07-24 17:24:46.138782] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:24:00.161 [2024-07-24 17:24:46.139128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.161 [2024-07-24 17:24:46.139160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:00.161 [2024-07-24 17:24:46.143689] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:24:00.161 [2024-07-24 17:24:46.143992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.161 [2024-07-24 17:24:46.144022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.161 [2024-07-24 17:24:46.148511] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:24:00.161 [2024-07-24 17:24:46.148813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.161 [2024-07-24 17:24:46.148843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:00.161 [2024-07-24 17:24:46.153306] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:24:00.161 [2024-07-24 17:24:46.153608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.161 [2024-07-24 17:24:46.153635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:00.161 [2024-07-24 17:24:46.158089] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:24:00.161 [2024-07-24 17:24:46.158403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.161 [2024-07-24 17:24:46.158434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:00.161 [2024-07-24 17:24:46.162865] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:24:00.161 [2024-07-24 17:24:46.163244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.161 [2024-07-24 17:24:46.163276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.161 [2024-07-24 17:24:46.167784] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:24:00.161 [2024-07-24 17:24:46.168101] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.161 [2024-07-24 17:24:46.168142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:00.161 [2024-07-24 17:24:46.172571] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:24:00.161 [2024-07-24 17:24:46.172874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.161 [2024-07-24 17:24:46.172905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:00.161 [2024-07-24 17:24:46.177446] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:24:00.161 [2024-07-24 17:24:46.177763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.161 [2024-07-24 17:24:46.177793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:00.161 [2024-07-24 17:24:46.182216] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:24:00.161 [2024-07-24 17:24:46.182537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.161 [2024-07-24 17:24:46.182568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.161 [2024-07-24 17:24:46.186983] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:24:00.161 [2024-07-24 17:24:46.187351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.161 [2024-07-24 17:24:46.187384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:00.161 [2024-07-24 17:24:46.191955] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:24:00.161 [2024-07-24 17:24:46.192360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.161 [2024-07-24 17:24:46.192391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:00.161 [2024-07-24 17:24:46.197418] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:24:00.161 [2024-07-24 17:24:46.197795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.161 [2024-07-24 17:24:46.197827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:00.161 [2024-07-24 17:24:46.202479] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 
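The completion prints also carry the raw status fields: qid and cid identify the queue pair and command, cdw0 and sqhd echo completion dword 0 and the submission queue head, and the "(00/22)" pair is status code type 00h (generic command status) with status code 22h, which spdk_nvme_print_completion renders as the transient transport error seen here. If a run ever mixes several status values, a rough breakdown can be pulled from the saved log (again a sketch only; bperf.log is a placeholder):

```bash
# Group the completion statuses printed by spdk_nvme_print_completion.
# Only NOTICE lines that end in an "(sct/sc)" pair are counted.
grep -oE '\*NOTICE\*: [A-Z ]+ \([0-9a-f]{2}/[0-9a-f]{2}\)' bperf.log | sort | uniq -c
```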
00:24:00.161 [2024-07-24 17:24:46.202796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.161 [2024-07-24 17:24:46.202826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.161 [2024-07-24 17:24:46.207339] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:24:00.161 [2024-07-24 17:24:46.207662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.161 [2024-07-24 17:24:46.207692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:00.161 [2024-07-24 17:24:46.212141] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:24:00.161 [2024-07-24 17:24:46.212454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.161 [2024-07-24 17:24:46.212484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:00.161 [2024-07-24 17:24:46.216918] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:24:00.161 [2024-07-24 17:24:46.217230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.161 [2024-07-24 17:24:46.217256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:00.162 [2024-07-24 17:24:46.221773] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:24:00.162 [2024-07-24 17:24:46.222073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.162 [2024-07-24 17:24:46.222112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.162 [2024-07-24 17:24:46.226530] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:24:00.162 [2024-07-24 17:24:46.226833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.162 [2024-07-24 17:24:46.226863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:00.162 [2024-07-24 17:24:46.231549] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:24:00.162 [2024-07-24 17:24:46.231849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.162 [2024-07-24 17:24:46.231880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:00.162 [2024-07-24 17:24:46.236291] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:24:00.162 [2024-07-24 17:24:46.236595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.162 [2024-07-24 17:24:46.236624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:00.162 [2024-07-24 17:24:46.241079] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:24:00.162 [2024-07-24 17:24:46.241390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.162 [2024-07-24 17:24:46.241421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.162 [2024-07-24 17:24:46.245865] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:24:00.162 [2024-07-24 17:24:46.246178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.162 [2024-07-24 17:24:46.246208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:00.162 [2024-07-24 17:24:46.250608] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:24:00.162 [2024-07-24 17:24:46.250909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.162 [2024-07-24 17:24:46.250939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:00.162 [2024-07-24 17:24:46.255621] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:24:00.162 [2024-07-24 17:24:46.255944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.162 [2024-07-24 17:24:46.255991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:00.162 [2024-07-24 17:24:46.260933] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:24:00.162 [2024-07-24 17:24:46.261286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.162 [2024-07-24 17:24:46.261321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.162 [2024-07-24 17:24:46.266302] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:24:00.162 [2024-07-24 17:24:46.266642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.162 [2024-07-24 17:24:46.266679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:00.162 [2024-07-24 17:24:46.271548] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:24:00.162 [2024-07-24 17:24:46.271865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.162 [2024-07-24 17:24:46.271896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:00.162 [2024-07-24 17:24:46.276947] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:24:00.162 [2024-07-24 17:24:46.277281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.162 [2024-07-24 17:24:46.277313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:00.162 [2024-07-24 17:24:46.282472] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:24:00.162 [2024-07-24 17:24:46.282797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.162 [2024-07-24 17:24:46.282830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.162 [2024-07-24 17:24:46.287787] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:24:00.162 [2024-07-24 17:24:46.288096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.162 [2024-07-24 17:24:46.288120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:00.162 [2024-07-24 17:24:46.293395] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:24:00.162 [2024-07-24 17:24:46.293687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.162 [2024-07-24 17:24:46.293711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:00.162 [2024-07-24 17:24:46.298889] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:24:00.162 [2024-07-24 17:24:46.299256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.162 [2024-07-24 17:24:46.299289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:00.162 [2024-07-24 17:24:46.304242] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:24:00.162 [2024-07-24 17:24:46.304571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.162 [2024-07-24 17:24:46.304603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
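The transient-transport-error count that piles up above is exactly what the digest_error test case checks once the job ends: it queries the bdevperf instance over its private RPC socket for the bdev's NVMe error counters and asserts the counter is non-zero. A minimal sketch of that query, using the socket path, bdev name and jq filter that appear in the trace just below (specific to this run's layout):

```bash
# Read the transient-transport-error counter for nvme0n1 from the bdevperf
# instance on /var/tmp/bperf.sock (paths and names as used in this run).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 | jq -r '
    .bdevs[0]
    | .driver_specific
    | .nvme_error
    | .status_code
    | .command_transient_transport_error')

(( errcount > 0 )) && echo "observed $errcount transient transport errors"
```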
00:24:00.162 [2024-07-24 17:24:46.309308] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:24:00.162 [2024-07-24 17:24:46.309616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.162 [2024-07-24 17:24:46.309647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:00.162 [2024-07-24 17:24:46.314309] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:24:00.162 [2024-07-24 17:24:46.314618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.162 [2024-07-24 17:24:46.314649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:00.162 [2024-07-24 17:24:46.319259] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:24:00.162 [2024-07-24 17:24:46.319550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.162 [2024-07-24 17:24:46.319583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:00.162 [2024-07-24 17:24:46.324258] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:24:00.162 [2024-07-24 17:24:46.324584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.162 [2024-07-24 17:24:46.324616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.162 [2024-07-24 17:24:46.329297] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:24:00.162 [2024-07-24 17:24:46.329590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.162 [2024-07-24 17:24:46.329621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:00.162 [2024-07-24 17:24:46.334149] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:24:00.162 [2024-07-24 17:24:46.334461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.162 [2024-07-24 17:24:46.334492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:00.162 [2024-07-24 17:24:46.339015] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:24:00.162 [2024-07-24 17:24:46.339354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.162 [2024-07-24 17:24:46.339386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:00.162 [2024-07-24 17:24:46.344025] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:24:00.162 [2024-07-24 17:24:46.344369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.162 [2024-07-24 17:24:46.344414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.162 [2024-07-24 17:24:46.348930] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:24:00.162 [2024-07-24 17:24:46.349253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.162 [2024-07-24 17:24:46.349286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:00.162 [2024-07-24 17:24:46.353896] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:24:00.162 [2024-07-24 17:24:46.354216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.162 [2024-07-24 17:24:46.354246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:00.163 [2024-07-24 17:24:46.358809] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:24:00.163 [2024-07-24 17:24:46.359148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.163 [2024-07-24 17:24:46.359181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:00.163 [2024-07-24 17:24:46.363766] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:24:00.163 [2024-07-24 17:24:46.364096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.163 [2024-07-24 17:24:46.364138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:00.163 [2024-07-24 17:24:46.368782] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:24:00.163 [2024-07-24 17:24:46.369094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.163 [2024-07-24 17:24:46.369134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:00.163 [2024-07-24 17:24:46.373680] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:24:00.163 [2024-07-24 17:24:46.374005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.163 [2024-07-24 17:24:46.374037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:00.163 [2024-07-24 17:24:46.378388] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x13c94c0) with pdu=0x2000190fef90 00:24:00.163 [2024-07-24 17:24:46.378515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.163 [2024-07-24 17:24:46.378536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:00.163 00:24:00.163 Latency(us) 00:24:00.163 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:00.163 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:24:00.163 nvme0n1 : 2.00 6117.19 764.65 0.00 0.00 2609.64 1854.37 5868.45 00:24:00.163 =================================================================================================================== 00:24:00.163 Total : 6117.19 764.65 0.00 0.00 2609.64 1854.37 5868.45 00:24:00.163 0 00:24:00.421 17:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:24:00.421 17:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:24:00.421 17:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:24:00.421 17:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:24:00.421 | .driver_specific 00:24:00.421 | .nvme_error 00:24:00.421 | .status_code 00:24:00.421 | .command_transient_transport_error' 00:24:00.680 17:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 395 > 0 )) 00:24:00.680 17:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 93251 00:24:00.680 17:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 93251 ']' 00:24:00.680 17:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 93251 00:24:00.680 17:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:24:00.680 17:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:00.680 17:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 93251 00:24:00.680 killing process with pid 93251 00:24:00.680 Received shutdown signal, test time was about 2.000000 seconds 00:24:00.680 00:24:00.680 Latency(us) 00:24:00.680 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:00.680 =================================================================================================================== 00:24:00.680 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:00.680 17:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:00.680 17:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:00.680 17:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 93251' 00:24:00.680 17:24:46 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 93251 00:24:00.680 17:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 93251 00:24:00.939 17:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 92941 00:24:00.939 17:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 92941 ']' 00:24:00.939 17:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 92941 00:24:00.939 17:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:24:00.939 17:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:00.939 17:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92941 00:24:00.939 killing process with pid 92941 00:24:00.939 17:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:00.939 17:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:00.939 17:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92941' 00:24:00.939 17:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 92941 00:24:00.939 17:24:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 92941 00:24:00.939 ************************************ 00:24:00.939 END TEST nvmf_digest_error 00:24:00.939 ************************************ 00:24:00.939 00:24:00.939 real 0m18.451s 00:24:00.939 user 0m35.064s 00:24:00.939 sys 0m4.746s 00:24:00.939 17:24:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:00.939 17:24:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:01.198 17:24:47 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:24:01.198 17:24:47 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:24:01.198 17:24:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:01.198 17:24:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:24:01.198 17:24:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:01.198 17:24:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:24:01.198 17:24:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:01.198 17:24:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:01.198 rmmod nvme_tcp 00:24:01.198 rmmod nvme_fabrics 00:24:01.198 rmmod nvme_keyring 00:24:01.198 17:24:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:01.198 17:24:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:24:01.198 17:24:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:24:01.198 17:24:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 92941 ']' 00:24:01.198 17:24:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 92941 00:24:01.198 17:24:47 nvmf_tcp.nvmf_host.nvmf_digest -- 
common/autotest_common.sh@950 -- # '[' -z 92941 ']' 00:24:01.198 17:24:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 92941 00:24:01.198 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (92941) - No such process 00:24:01.198 Process with pid 92941 is not found 00:24:01.198 17:24:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 92941 is not found' 00:24:01.198 17:24:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:01.198 17:24:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:01.198 17:24:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:01.198 17:24:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:01.198 17:24:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:01.198 17:24:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:01.198 17:24:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:01.198 17:24:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:01.198 17:24:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:24:01.198 00:24:01.198 real 0m37.767s 00:24:01.198 user 1m10.725s 00:24:01.198 sys 0m9.644s 00:24:01.198 17:24:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:01.198 17:24:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:01.198 ************************************ 00:24:01.198 END TEST nvmf_digest 00:24:01.198 ************************************ 00:24:01.198 17:24:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 1 -eq 1 ]] 00:24:01.198 17:24:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ tcp == \t\c\p ]] 00:24:01.198 17:24:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@38 -- # run_test nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:24:01.198 17:24:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:01.198 17:24:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:01.198 17:24:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.198 ************************************ 00:24:01.198 START TEST nvmf_mdns_discovery 00:24:01.198 ************************************ 00:24:01.198 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:24:01.457 * Looking for test storage... 
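The nvmf_digest_error result above reduces to a single counter: host/digest.sh reads the per-bdev NVMe error statistics over the bperf RPC socket and passes only when the transient-transport-error count is non-zero (the (( 395 > 0 )) check in the trace). A condensed sketch of that query, using only the rpc.py invocation and jq filter visible in the trace above:

  # Query bdev I/O statistics for nvme0n1 through the bdevperf RPC socket
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  errs=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  # The run counts as a pass only if transient transport errors were actually observed
  (( errs > 0 ))
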
00:24:01.457 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:01.457 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:01.457 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # uname -s 00:24:01.457 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:01.457 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:01.457 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:01.457 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:01.457 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:01.457 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:01.457 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:01.457 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:01.457 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:01.457 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:01.457 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:24:01.457 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:24:01.457 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:01.457 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:01.457 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:01.457 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:01.457 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:01.457 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:01.457 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:01.457 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:01.457 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.457 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.457 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.457 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@5 -- # export PATH 00:24:01.457 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.457 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@47 -- # : 0 00:24:01.457 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:01.457 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:01.457 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:01.457 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:01.457 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:01.457 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:01.457 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:01.457 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:01.457 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@13 -- # DISCOVERY_FILTER=address 00:24:01.457 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@14 -- # DISCOVERY_PORT=8009 00:24:01.457 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:24:01.457 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@18 -- # NQN=nqn.2016-06.io.spdk:cnode 
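For orientation, the mdns_discovery flow traced from here on reduces to roughly the following RPC sequence; this is a condensed sketch assembled from the commands that appear verbatim in the trace below (rpc_cmd is the autotest helper around scripts/rpc.py, and the host-side app listens on /tmp/host.sock):

  # Target side, inside the nvmf_tgt_ns_spdk namespace: TCP transport plus a discovery listener on port 8009
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
  # Host side, against the second SPDK app: browse _nvme-disc._tcp via avahi and attach to what is advertised
  rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test
  # Target side: advertise the discovery service over mDNS
  rpc_cmd nvmf_publish_mdns_prr
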
00:24:01.457 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@19 -- # NQN2=nqn.2016-06.io.spdk:cnode2 00:24:01.457 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@21 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:24:01.457 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@22 -- # HOST_SOCK=/tmp/host.sock 00:24:01.457 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@24 -- # nvmftestinit 00:24:01.457 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:01.457 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:01.457 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:01.457 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:01.457 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:01.457 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:01.457 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:01.457 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:01.457 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:24:01.457 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:24:01.457 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:24:01.457 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:24:01.457 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:24:01.457 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:24:01.457 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:01.457 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:01.457 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:01.457 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:24:01.457 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:01.457 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:01.457 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:01.457 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:01.457 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:01.457 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:01.457 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:01.457 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:01.457 17:24:47 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:24:01.457 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:24:01.457 Cannot find device "nvmf_tgt_br" 00:24:01.457 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@155 -- # true 00:24:01.458 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:24:01.458 Cannot find device "nvmf_tgt_br2" 00:24:01.458 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@156 -- # true 00:24:01.458 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:24:01.458 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:24:01.458 Cannot find device "nvmf_tgt_br" 00:24:01.458 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@158 -- # true 00:24:01.458 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:24:01.458 Cannot find device "nvmf_tgt_br2" 00:24:01.458 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@159 -- # true 00:24:01.458 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:24:01.458 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:24:01.458 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:01.458 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:01.458 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # true 00:24:01.458 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:01.458 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:01.458 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # true 00:24:01.458 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:24:01.458 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:01.458 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:01.458 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:01.458 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:01.716 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:01.716 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:01.716 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:01.716 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:01.716 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:24:01.716 
17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:24:01.716 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:24:01.716 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:24:01.716 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:01.716 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:01.716 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:01.716 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:24:01.716 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:24:01.716 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:24:01.716 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:01.716 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:01.716 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:01.716 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:01.716 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:24:01.716 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:01.716 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:24:01.716 00:24:01.716 --- 10.0.0.2 ping statistics --- 00:24:01.716 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:01.716 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:24:01.716 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:24:01.716 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:01.716 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:24:01.716 00:24:01.716 --- 10.0.0.3 ping statistics --- 00:24:01.716 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:01.716 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:24:01.716 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:01.716 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:01.716 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:24:01.716 00:24:01.716 --- 10.0.0.1 ping statistics --- 00:24:01.716 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:01.716 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:24:01.716 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:01.716 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@433 -- # return 0 00:24:01.716 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:01.716 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:01.716 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:01.716 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:01.716 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:01.716 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:01.716 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:01.716 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@29 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:24:01.716 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:01.717 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:01.717 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:01.717 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@481 -- # nvmfpid=93545 00:24:01.717 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@482 -- # waitforlisten 93545 00:24:01.717 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@831 -- # '[' -z 93545 ']' 00:24:01.717 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:01.717 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:01.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:01.717 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:01.717 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:01.717 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:24:01.717 17:24:47 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:01.717 [2024-07-24 17:24:47.909346] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
00:24:01.717 [2024-07-24 17:24:47.909484] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:01.975 [2024-07-24 17:24:48.048316] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:01.975 [2024-07-24 17:24:48.160519] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:01.975 [2024-07-24 17:24:48.160585] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:01.975 [2024-07-24 17:24:48.160596] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:01.975 [2024-07-24 17:24:48.160604] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:01.975 [2024-07-24 17:24:48.160610] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:01.975 [2024-07-24 17:24:48.160636] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:02.911 17:24:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:02.911 17:24:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@864 -- # return 0 00:24:02.911 17:24:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:02.911 17:24:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:02.911 17:24:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:02.911 17:24:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:02.911 17:24:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@31 -- # rpc_cmd nvmf_set_config --discovery-filter=address 00:24:02.911 17:24:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.911 17:24:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:02.911 17:24:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.911 17:24:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@32 -- # rpc_cmd framework_start_init 00:24:02.911 17:24:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.911 17:24:48 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:02.911 17:24:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.911 17:24:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:02.911 17:24:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.911 17:24:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:02.911 [2024-07-24 17:24:49.056468] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:02.911 17:24:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.911 17:24:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:24:02.911 17:24:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.911 17:24:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:02.911 [2024-07-24 17:24:49.064616] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:02.911 17:24:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.911 17:24:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@36 -- # rpc_cmd bdev_null_create null0 1000 512 00:24:02.911 17:24:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.911 17:24:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:02.911 null0 00:24:02.911 17:24:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.911 17:24:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@37 -- # rpc_cmd bdev_null_create null1 1000 512 00:24:02.911 17:24:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.911 17:24:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:02.911 null1 00:24:02.911 17:24:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.911 17:24:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null2 1000 512 00:24:02.911 17:24:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.911 17:24:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:02.911 null2 00:24:02.911 17:24:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.911 17:24:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_null_create null3 1000 512 00:24:02.911 17:24:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.911 17:24:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:02.911 null3 00:24:02.911 17:24:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.911 17:24:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@40 -- # rpc_cmd bdev_wait_for_examine 00:24:02.911 17:24:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.911 17:24:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:02.911 17:24:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.911 17:24:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@48 -- # hostpid=93595 00:24:02.911 17:24:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@49 -- # waitforlisten 93595 /tmp/host.sock 00:24:02.911 17:24:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@831 -- # '[' -z 93595 ']' 00:24:02.911 17:24:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:24:02.911 17:24:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:02.911 Waiting 
for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:02.911 17:24:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:02.912 17:24:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:02.912 17:24:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:02.912 17:24:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:24:03.170 [2024-07-24 17:24:49.161904] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:24:03.170 [2024-07-24 17:24:49.162011] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93595 ] 00:24:03.170 [2024-07-24 17:24:49.297495] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:03.429 [2024-07-24 17:24:49.415823] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:03.429 17:24:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:03.429 17:24:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@864 -- # return 0 00:24:03.429 17:24:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM 00:24:03.429 17:24:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@52 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahipid;' EXIT 00:24:03.429 17:24:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@56 -- # avahi-daemon --kill 00:24:03.429 17:24:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@58 -- # avahipid=93606 00:24:03.429 17:24:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@59 -- # sleep 1 00:24:03.429 17:24:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63 00:24:03.429 17:24:49 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no' 00:24:03.688 Process 985 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid) 00:24:03.688 Found user 'avahi' (UID 70) and group 'avahi' (GID 70). 00:24:03.688 Successfully dropped root privileges. 00:24:03.688 avahi-daemon 0.8 starting up. 00:24:03.688 WARNING: No NSS support for mDNS detected, consider installing nss-mdns! 00:24:03.688 Successfully called chroot(). 00:24:03.688 Successfully dropped remaining capabilities. 00:24:03.688 No service file found in /etc/avahi/services. 00:24:03.688 Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:24:03.688 New relevant interface nvmf_tgt_if2.IPv4 for mDNS. 00:24:03.688 Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:24:03.688 New relevant interface nvmf_tgt_if.IPv4 for mDNS. 00:24:03.688 Network interface enumeration completed. 00:24:03.688 Registering new address record for fe80::587a:63ff:fef9:f6a7 on nvmf_tgt_if2.*. 
00:24:03.688 Registering new address record for 10.0.0.3 on nvmf_tgt_if2.IPv4. 00:24:03.688 Registering new address record for fe80::e073:5fff:fecc:6446 on nvmf_tgt_if.*. 00:24:03.688 Registering new address record for 10.0.0.2 on nvmf_tgt_if.IPv4. 00:24:04.624 Server startup complete. Host name is fedora38-cloud-1716830599-074-updated-1705279005.local. Local service cookie is 2180697462. 00:24:04.624 17:24:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:24:04.624 17:24:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.624 17:24:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:04.624 17:24:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.624 17:24:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@62 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:24:04.624 17:24:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.624 17:24:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:04.624 17:24:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.624 17:24:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # notify_id=0 00:24:04.624 17:24:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # get_subsystem_names 00:24:04.624 17:24:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:04.624 17:24:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:24:04.624 17:24:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.624 17:24:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:04.624 17:24:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:24:04.624 17:24:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:24:04.624 17:24:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.624 17:24:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # [[ '' == '' ]] 00:24:04.624 17:24:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # get_bdev_list 00:24:04.624 17:24:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:04.624 17:24:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:24:04.624 17:24:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:24:04.624 17:24:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.624 17:24:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:24:04.624 17:24:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:04.624 17:24:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.624 17:24:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # [[ '' == '' ]] 00:24:04.624 17:24:50 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:24:04.624 17:24:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.624 17:24:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:04.624 17:24:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.624 17:24:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # get_subsystem_names 00:24:04.624 17:24:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:04.624 17:24:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:24:04.624 17:24:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.624 17:24:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:04.624 17:24:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:24:04.624 17:24:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:24:04.624 17:24:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.883 17:24:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ '' == '' ]] 00:24:04.883 17:24:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # get_bdev_list 00:24:04.883 17:24:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:04.883 17:24:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:24:04.883 17:24:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.883 17:24:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:04.883 17:24:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:24:04.883 17:24:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:24:04.883 17:24:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.883 17:24:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # [[ '' == '' ]] 00:24:04.883 17:24:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:24:04.883 17:24:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.883 17:24:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:04.883 17:24:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.883 17:24:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@100 -- # get_subsystem_names 00:24:04.883 17:24:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:04.883 17:24:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:24:04.883 17:24:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.883 17:24:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # 
set +x 00:24:04.883 17:24:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:24:04.883 17:24:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:24:04.883 17:24:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.883 [2024-07-24 17:24:50.976730] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:24:04.883 17:24:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@100 -- # [[ '' == '' ]] 00:24:04.883 17:24:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@101 -- # get_bdev_list 00:24:04.883 17:24:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:04.883 17:24:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:24:04.883 17:24:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:24:04.883 17:24:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.883 17:24:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:04.883 17:24:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:24:04.883 17:24:50 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.883 17:24:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@101 -- # [[ '' == '' ]] 00:24:04.883 17:24:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@105 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:04.884 17:24:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.884 17:24:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:04.884 [2024-07-24 17:24:51.037203] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:04.884 17:24:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.884 17:24:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@109 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:24:04.884 17:24:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.884 17:24:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:04.884 17:24:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.884 17:24:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@112 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20 00:24:04.884 17:24:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.884 17:24:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:04.884 17:24:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.884 17:24:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@113 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2 00:24:04.884 17:24:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.884 17:24:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:24:04.884 17:24:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.884 17:24:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test 00:24:04.884 17:24:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.884 17:24:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:04.884 17:24:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.884 17:24:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@119 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:24:04.884 17:24:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.884 17:24:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:04.884 [2024-07-24 17:24:51.077118] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:24:04.884 17:24:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.884 17:24:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@121 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:24:04.884 17:24:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.884 17:24:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:04.884 [2024-07-24 17:24:51.085094] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:04.884 17:24:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.884 17:24:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@124 -- # rpc_cmd nvmf_publish_mdns_prr 00:24:04.884 17:24:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:04.884 17:24:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:04.884 17:24:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:04.884 17:24:51 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@125 -- # sleep 5 00:24:05.819 [2024-07-24 17:24:51.876741] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:24:06.387 [2024-07-24 17:24:52.476782] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:24:06.387 [2024-07-24 17:24:52.476822] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:24:06.387 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:24:06.387 cookie is 0 00:24:06.387 is_local: 1 00:24:06.387 our_own: 0 00:24:06.387 wide_area: 0 00:24:06.387 multicast: 1 00:24:06.387 cached: 1 00:24:06.387 [2024-07-24 17:24:52.576757] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:24:06.387 [2024-07-24 17:24:52.576789] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:24:06.387 
TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:24:06.387 cookie is 0 00:24:06.387 is_local: 1 00:24:06.387 our_own: 0 00:24:06.387 wide_area: 0 00:24:06.387 multicast: 1 00:24:06.387 cached: 1 00:24:06.387 [2024-07-24 17:24:52.576818] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.3 trid->trsvcid: 8009 00:24:06.645 [2024-07-24 17:24:52.676754] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:24:06.645 [2024-07-24 17:24:52.676778] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:24:06.645 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:24:06.645 cookie is 0 00:24:06.645 is_local: 1 00:24:06.645 our_own: 0 00:24:06.645 wide_area: 0 00:24:06.645 multicast: 1 00:24:06.645 cached: 1 00:24:06.645 [2024-07-24 17:24:52.776755] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:24:06.645 [2024-07-24 17:24:52.776781] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:24:06.645 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:24:06.645 cookie is 0 00:24:06.645 is_local: 1 00:24:06.645 our_own: 0 00:24:06.645 wide_area: 0 00:24:06.645 multicast: 1 00:24:06.645 cached: 1 00:24:06.645 [2024-07-24 17:24:52.776808] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.2 trid->trsvcid: 8009 00:24:07.580 [2024-07-24 17:24:53.480892] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:24:07.580 [2024-07-24 17:24:53.480925] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:24:07.580 [2024-07-24 17:24:53.480973] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:24:07.580 [2024-07-24 17:24:53.567067] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 new subsystem mdns0_nvme0 00:24:07.580 [2024-07-24 17:24:53.624305] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:24:07.580 [2024-07-24 17:24:53.624335] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:24:07.580 [2024-07-24 17:24:53.680604] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:07.580 [2024-07-24 17:24:53.680628] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:07.580 [2024-07-24 17:24:53.680660] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:07.580 [2024-07-24 17:24:53.766721] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem mdns1_nvme0 00:24:07.838 [2024-07-24 17:24:53.822624] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:24:07.838 [2024-07-24 17:24:53.822651] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:10.370 17:24:56 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@127 -- # get_mdns_discovery_svcs 00:24:10.370 17:24:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:24:10.370 17:24:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:24:10.370 17:24:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.370 17:24:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:10.370 17:24:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:24:10.370 17:24:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:24:10.370 17:24:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.370 17:24:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@127 -- # [[ mdns == \m\d\n\s ]] 00:24:10.370 17:24:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # get_discovery_ctrlrs 00:24:10.371 17:24:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:10.371 17:24:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:24:10.371 17:24:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.371 17:24:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:24:10.371 17:24:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:10.371 17:24:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:24:10.371 17:24:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.371 17:24:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:24:10.371 17:24:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # get_subsystem_names 00:24:10.371 17:24:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:10.371 17:24:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.371 17:24:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:10.371 17:24:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:24:10.371 17:24:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:24:10.371 17:24:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:24:10.371 17:24:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.371 17:24:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:24:10.371 17:24:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@130 -- # get_bdev_list 00:24:10.371 17:24:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:10.371 17:24:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.371 
17:24:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:10.371 17:24:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:24:10.371 17:24:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:24:10.371 17:24:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:24:10.371 17:24:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.371 17:24:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@130 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\1 ]] 00:24:10.371 17:24:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@131 -- # get_subsystem_paths mdns0_nvme0 00:24:10.371 17:24:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:24:10.371 17:24:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:10.371 17:24:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:24:10.371 17:24:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.371 17:24:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:10.371 17:24:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:24:10.371 17:24:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.371 17:24:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@131 -- # [[ 4420 == \4\4\2\0 ]] 00:24:10.371 17:24:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@132 -- # get_subsystem_paths mdns1_nvme0 00:24:10.371 17:24:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:24:10.371 17:24:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.371 17:24:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:10.371 17:24:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:24:10.371 17:24:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:10.371 17:24:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:24:10.371 17:24:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.371 17:24:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@132 -- # [[ 4420 == \4\4\2\0 ]] 00:24:10.371 17:24:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@133 -- # get_notification_count 00:24:10.371 17:24:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:24:10.371 17:24:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:24:10.371 17:24:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.371 17:24:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:10.371 17:24:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.371 17:24:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=2 00:24:10.371 17:24:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=2 00:24:10.371 17:24:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@134 -- # [[ 2 == 2 ]] 00:24:10.371 17:24:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:24:10.371 17:24:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.371 17:24:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:10.371 17:24:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.371 17:24:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@138 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3 00:24:10.371 17:24:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.371 17:24:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:10.371 17:24:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.371 17:24:56 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@139 -- # sleep 1 00:24:11.307 17:24:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@141 -- # get_bdev_list 00:24:11.307 17:24:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:11.307 17:24:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:24:11.307 17:24:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.307 17:24:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:11.307 17:24:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:24:11.307 17:24:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:24:11.566 17:24:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.566 17:24:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@141 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:24:11.566 17:24:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@142 -- # get_notification_count 00:24:11.566 17:24:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:24:11.566 17:24:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:11.566 17:24:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.566 17:24:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:11.566 17:24:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.566 17:24:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=2 00:24:11.566 17:24:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=4 00:24:11.566 17:24:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@143 -- # [[ 2 == 2 ]] 00:24:11.566 17:24:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@147 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:24:11.566 17:24:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.566 17:24:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:11.566 [2024-07-24 17:24:57.623996] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:11.566 [2024-07-24 17:24:57.624923] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:24:11.566 [2024-07-24 17:24:57.624965] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:24:11.567 [2024-07-24 17:24:57.624999] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:24:11.567 [2024-07-24 17:24:57.625011] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:11.567 17:24:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.567 17:24:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4421 00:24:11.567 17:24:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.567 17:24:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:11.567 [2024-07-24 17:24:57.631895] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:24:11.567 [2024-07-24 17:24:57.632927] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:24:11.567 [2024-07-24 17:24:57.632979] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:24:11.567 17:24:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.567 17:24:57 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@149 -- # sleep 1 00:24:11.567 [2024-07-24 17:24:57.764041] bdev_nvme.c:6935:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new path for mdns0_nvme0 00:24:11.567 [2024-07-24 17:24:57.765053] bdev_nvme.c:6935:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for mdns1_nvme0 00:24:11.872 [2024-07-24 17:24:57.825334] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:24:11.872 [2024-07-24 
17:24:57.825355] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:11.872 [2024-07-24 17:24:57.825361] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:11.872 [2024-07-24 17:24:57.825377] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:11.872 [2024-07-24 17:24:57.825428] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:24:11.872 [2024-07-24 17:24:57.825437] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:24:11.872 [2024-07-24 17:24:57.825442] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:24:11.872 [2024-07-24 17:24:57.825454] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:24:11.872 [2024-07-24 17:24:57.871235] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:24:11.872 [2024-07-24 17:24:57.871256] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:24:11.872 [2024-07-24 17:24:57.871301] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:11.872 [2024-07-24 17:24:57.871310] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:12.440 17:24:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@151 -- # get_subsystem_names 00:24:12.440 17:24:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:12.440 17:24:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.440 17:24:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:24:12.440 17:24:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:12.440 17:24:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:24:12.440 17:24:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:24:12.440 17:24:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.699 17:24:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@151 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:24:12.699 17:24:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@152 -- # get_bdev_list 00:24:12.699 17:24:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:24:12.699 17:24:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:12.699 17:24:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.699 17:24:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:24:12.699 17:24:58 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:24:12.699 17:24:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:12.699 17:24:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.699 17:24:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@152 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:24:12.699 17:24:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@153 -- # get_subsystem_paths mdns0_nvme0 00:24:12.699 17:24:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:24:12.699 17:24:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.699 17:24:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:12.699 17:24:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:12.699 17:24:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:24:12.699 17:24:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:24:12.699 17:24:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.699 17:24:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@153 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:24:12.699 17:24:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@154 -- # get_subsystem_paths mdns1_nvme0 00:24:12.699 17:24:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:24:12.699 17:24:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:12.699 17:24:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.699 17:24:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:24:12.699 17:24:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:12.699 17:24:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:24:12.699 17:24:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.699 17:24:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@154 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:24:12.699 17:24:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@155 -- # get_notification_count 00:24:12.699 17:24:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:24:12.699 17:24:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.699 17:24:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:24:12.699 17:24:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:12.699 17:24:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.960 17:24:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=0 00:24:12.960 17:24:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=4 00:24:12.960 17:24:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@156 -- # [[ 0 == 0 ]] 00:24:12.960 17:24:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@160 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:12.960 17:24:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.960 17:24:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:12.960 [2024-07-24 17:24:58.957037] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:24:12.960 [2024-07-24 17:24:58.957288] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:24:12.960 [2024-07-24 17:24:58.957529] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:24:12.960 [2024-07-24 17:24:58.957711] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:12.960 17:24:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.960 [2024-07-24 17:24:58.962076] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:12.960 [2024-07-24 17:24:58.962146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.960 [2024-07-24 17:24:58.962162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:12.960 [2024-07-24 17:24:58.962172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.960 [2024-07-24 17:24:58.962183] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:12.960 [2024-07-24 17:24:58.962192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.960 [2024-07-24 17:24:58.962202] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:12.960 [2024-07-24 17:24:58.962211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.960 [2024-07-24 17:24:58.962220] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaf6b0 is same with the state(5) to be set 00:24:12.960 17:24:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@161 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:24:12.960 17:24:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.960 17:24:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:24:12.960 [2024-07-24 17:24:58.969048] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:24:12.960 [2024-07-24 17:24:58.969102] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:24:12.960 [2024-07-24 17:24:58.972079] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eaf6b0 (9): Bad file descriptor 00:24:12.960 17:24:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.960 17:24:58 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@162 -- # sleep 1 00:24:12.960 [2024-07-24 17:24:58.975460] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:12.960 [2024-07-24 17:24:58.975489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.960 [2024-07-24 17:24:58.975502] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:12.960 [2024-07-24 17:24:58.975510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.960 [2024-07-24 17:24:58.975535] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:12.960 [2024-07-24 17:24:58.975543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.960 [2024-07-24 17:24:58.975552] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:12.960 [2024-07-24 17:24:58.975560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:12.960 [2024-07-24 17:24:58.975568] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaf380 is same with the state(5) to be set 00:24:12.960 [2024-07-24 17:24:58.982099] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:12.960 [2024-07-24 17:24:58.982418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:12.960 [2024-07-24 17:24:58.982466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eaf6b0 with addr=10.0.0.2, port=4420 00:24:12.960 [2024-07-24 17:24:58.982480] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaf6b0 is same with the state(5) to be set 00:24:12.960 [2024-07-24 17:24:58.982511] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eaf6b0 (9): Bad file descriptor 00:24:12.960 [2024-07-24 17:24:58.982543] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:12.960 [2024-07-24 17:24:58.982552] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:12.961 [2024-07-24 17:24:58.982562] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:12.961 [2024-07-24 17:24:58.982579] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:12.961 [2024-07-24 17:24:58.985394] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eaf380 (9): Bad file descriptor 00:24:12.961 [2024-07-24 17:24:58.992344] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:12.961 [2024-07-24 17:24:58.992449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:12.961 [2024-07-24 17:24:58.992471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eaf6b0 with addr=10.0.0.2, port=4420 00:24:12.961 [2024-07-24 17:24:58.992490] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaf6b0 is same with the state(5) to be set 00:24:12.961 [2024-07-24 17:24:58.992522] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eaf6b0 (9): Bad file descriptor 00:24:12.961 [2024-07-24 17:24:58.992551] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:12.961 [2024-07-24 17:24:58.992559] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:12.961 [2024-07-24 17:24:58.992568] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:12.961 [2024-07-24 17:24:58.992581] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:12.961 [2024-07-24 17:24:58.995407] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:24:12.961 [2024-07-24 17:24:58.995570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:12.961 [2024-07-24 17:24:58.995590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eaf380 with addr=10.0.0.3, port=4420 00:24:12.961 [2024-07-24 17:24:58.995600] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaf380 is same with the state(5) to be set 00:24:12.961 [2024-07-24 17:24:58.995615] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eaf380 (9): Bad file descriptor 00:24:12.961 [2024-07-24 17:24:58.995639] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:24:12.961 [2024-07-24 17:24:58.995648] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:24:12.961 [2024-07-24 17:24:58.995656] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:24:12.961 [2024-07-24 17:24:58.995678] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:12.961 [2024-07-24 17:24:59.002425] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:12.961 [2024-07-24 17:24:59.002526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:12.961 [2024-07-24 17:24:59.002547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eaf6b0 with addr=10.0.0.2, port=4420 00:24:12.961 [2024-07-24 17:24:59.002557] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaf6b0 is same with the state(5) to be set 00:24:12.961 [2024-07-24 17:24:59.002573] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eaf6b0 (9): Bad file descriptor 00:24:12.961 [2024-07-24 17:24:59.002588] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:12.961 [2024-07-24 17:24:59.002597] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:12.961 [2024-07-24 17:24:59.002605] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:12.961 [2024-07-24 17:24:59.002619] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:12.961 [2024-07-24 17:24:59.005492] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:24:12.961 [2024-07-24 17:24:59.005595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:12.961 [2024-07-24 17:24:59.005616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eaf380 with addr=10.0.0.3, port=4420 00:24:12.961 [2024-07-24 17:24:59.005627] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaf380 is same with the state(5) to be set 00:24:12.961 [2024-07-24 17:24:59.005644] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eaf380 (9): Bad file descriptor 00:24:12.961 [2024-07-24 17:24:59.005675] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:24:12.961 [2024-07-24 17:24:59.005686] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:24:12.961 [2024-07-24 17:24:59.005695] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:24:12.961 [2024-07-24 17:24:59.005710] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:12.961 [2024-07-24 17:24:59.012541] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:12.961 [2024-07-24 17:24:59.012638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:12.961 [2024-07-24 17:24:59.012658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eaf6b0 with addr=10.0.0.2, port=4420 00:24:12.961 [2024-07-24 17:24:59.012668] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaf6b0 is same with the state(5) to be set 00:24:12.961 [2024-07-24 17:24:59.012684] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eaf6b0 (9): Bad file descriptor 00:24:12.961 [2024-07-24 17:24:59.012697] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:12.961 [2024-07-24 17:24:59.012706] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:12.961 [2024-07-24 17:24:59.012714] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:12.961 [2024-07-24 17:24:59.012727] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:12.961 [2024-07-24 17:24:59.015561] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:24:12.961 [2024-07-24 17:24:59.015669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:12.961 [2024-07-24 17:24:59.015691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eaf380 with addr=10.0.0.3, port=4420 00:24:12.961 [2024-07-24 17:24:59.015702] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaf380 is same with the state(5) to be set 00:24:12.961 [2024-07-24 17:24:59.015718] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eaf380 (9): Bad file descriptor 00:24:12.961 [2024-07-24 17:24:59.015750] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:24:12.961 [2024-07-24 17:24:59.015761] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:24:12.961 [2024-07-24 17:24:59.015769] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:24:12.961 [2024-07-24 17:24:59.015784] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:12.961 [2024-07-24 17:24:59.022609] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:12.961 [2024-07-24 17:24:59.022709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:12.961 [2024-07-24 17:24:59.022729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eaf6b0 with addr=10.0.0.2, port=4420 00:24:12.961 [2024-07-24 17:24:59.022739] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaf6b0 is same with the state(5) to be set 00:24:12.961 [2024-07-24 17:24:59.022754] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eaf6b0 (9): Bad file descriptor 00:24:12.961 [2024-07-24 17:24:59.022767] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:12.961 [2024-07-24 17:24:59.022775] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:12.961 [2024-07-24 17:24:59.022784] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:12.961 [2024-07-24 17:24:59.022797] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:12.961 [2024-07-24 17:24:59.025634] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:24:12.961 [2024-07-24 17:24:59.025730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:12.961 [2024-07-24 17:24:59.025750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eaf380 with addr=10.0.0.3, port=4420 00:24:12.961 [2024-07-24 17:24:59.025760] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaf380 is same with the state(5) to be set 00:24:12.961 [2024-07-24 17:24:59.025775] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eaf380 (9): Bad file descriptor 00:24:12.961 [2024-07-24 17:24:59.025805] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:24:12.961 [2024-07-24 17:24:59.025815] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:24:12.961 [2024-07-24 17:24:59.025823] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:24:12.961 [2024-07-24 17:24:59.025836] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:12.961 [2024-07-24 17:24:59.032677] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:12.961 [2024-07-24 17:24:59.032781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:12.961 [2024-07-24 17:24:59.032801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eaf6b0 with addr=10.0.0.2, port=4420 00:24:12.961 [2024-07-24 17:24:59.032811] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaf6b0 is same with the state(5) to be set 00:24:12.961 [2024-07-24 17:24:59.032826] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eaf6b0 (9): Bad file descriptor 00:24:12.961 [2024-07-24 17:24:59.032839] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:12.961 [2024-07-24 17:24:59.032847] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:12.961 [2024-07-24 17:24:59.032856] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:12.961 [2024-07-24 17:24:59.032869] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:12.961 [2024-07-24 17:24:59.035699] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:24:12.961 [2024-07-24 17:24:59.035796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:12.961 [2024-07-24 17:24:59.035815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eaf380 with addr=10.0.0.3, port=4420 00:24:12.962 [2024-07-24 17:24:59.035825] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaf380 is same with the state(5) to be set 00:24:12.962 [2024-07-24 17:24:59.035840] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eaf380 (9): Bad file descriptor 00:24:12.962 [2024-07-24 17:24:59.035870] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:24:12.962 [2024-07-24 17:24:59.035879] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:24:12.962 [2024-07-24 17:24:59.035888] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:24:12.962 [2024-07-24 17:24:59.035901] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:12.962 [2024-07-24 17:24:59.042752] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:12.962 [2024-07-24 17:24:59.042863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:12.962 [2024-07-24 17:24:59.042883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eaf6b0 with addr=10.0.0.2, port=4420 00:24:12.962 [2024-07-24 17:24:59.042892] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaf6b0 is same with the state(5) to be set 00:24:12.962 [2024-07-24 17:24:59.042907] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eaf6b0 (9): Bad file descriptor 00:24:12.962 [2024-07-24 17:24:59.042920] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:12.962 [2024-07-24 17:24:59.042928] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:12.962 [2024-07-24 17:24:59.042936] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:12.962 [2024-07-24 17:24:59.042949] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:12.962 [2024-07-24 17:24:59.045764] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:24:12.962 [2024-07-24 17:24:59.045861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:12.962 [2024-07-24 17:24:59.045882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eaf380 with addr=10.0.0.3, port=4420 00:24:12.962 [2024-07-24 17:24:59.045892] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaf380 is same with the state(5) to be set 00:24:12.962 [2024-07-24 17:24:59.045906] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eaf380 (9): Bad file descriptor 00:24:12.962 [2024-07-24 17:24:59.045936] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:24:12.962 [2024-07-24 17:24:59.045945] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:24:12.962 [2024-07-24 17:24:59.045970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:24:12.962 [2024-07-24 17:24:59.045984] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:12.962 [2024-07-24 17:24:59.052819] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:12.962 [2024-07-24 17:24:59.052915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:12.962 [2024-07-24 17:24:59.052935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eaf6b0 with addr=10.0.0.2, port=4420 00:24:12.962 [2024-07-24 17:24:59.052944] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaf6b0 is same with the state(5) to be set 00:24:12.962 [2024-07-24 17:24:59.052959] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eaf6b0 (9): Bad file descriptor 00:24:12.962 [2024-07-24 17:24:59.052972] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:12.962 [2024-07-24 17:24:59.052981] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:12.962 [2024-07-24 17:24:59.052989] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:12.962 [2024-07-24 17:24:59.053002] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:12.962 [2024-07-24 17:24:59.055830] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:24:12.962 [2024-07-24 17:24:59.055926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:12.962 [2024-07-24 17:24:59.055946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eaf380 with addr=10.0.0.3, port=4420 00:24:12.962 [2024-07-24 17:24:59.055956] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaf380 is same with the state(5) to be set 00:24:12.962 [2024-07-24 17:24:59.055970] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eaf380 (9): Bad file descriptor 00:24:12.962 [2024-07-24 17:24:59.056000] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:24:12.962 [2024-07-24 17:24:59.056010] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:24:12.962 [2024-07-24 17:24:59.056018] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:24:12.962 [2024-07-24 17:24:59.056031] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:12.962 [2024-07-24 17:24:59.062890] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:12.962 [2024-07-24 17:24:59.063197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:12.962 [2024-07-24 17:24:59.063222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eaf6b0 with addr=10.0.0.2, port=4420 00:24:12.962 [2024-07-24 17:24:59.063234] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaf6b0 is same with the state(5) to be set 00:24:12.962 [2024-07-24 17:24:59.063252] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eaf6b0 (9): Bad file descriptor 00:24:12.962 [2024-07-24 17:24:59.063267] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:12.962 [2024-07-24 17:24:59.063276] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:12.962 [2024-07-24 17:24:59.063285] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:12.962 [2024-07-24 17:24:59.063301] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:12.962 [2024-07-24 17:24:59.065924] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:24:12.962 [2024-07-24 17:24:59.066034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:12.962 [2024-07-24 17:24:59.066071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eaf380 with addr=10.0.0.3, port=4420 00:24:12.962 [2024-07-24 17:24:59.066082] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaf380 is same with the state(5) to be set 00:24:12.962 [2024-07-24 17:24:59.066118] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eaf380 (9): Bad file descriptor 00:24:12.962 [2024-07-24 17:24:59.066152] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:24:12.962 [2024-07-24 17:24:59.066163] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:24:12.962 [2024-07-24 17:24:59.066173] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:24:12.962 [2024-07-24 17:24:59.066188] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:12.962 [2024-07-24 17:24:59.073154] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:12.962 [2024-07-24 17:24:59.073252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:12.962 [2024-07-24 17:24:59.073272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eaf6b0 with addr=10.0.0.2, port=4420 00:24:12.962 [2024-07-24 17:24:59.073282] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaf6b0 is same with the state(5) to be set 00:24:12.962 [2024-07-24 17:24:59.073297] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eaf6b0 (9): Bad file descriptor 00:24:12.962 [2024-07-24 17:24:59.073310] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:12.962 [2024-07-24 17:24:59.073318] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:12.962 [2024-07-24 17:24:59.073326] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:12.962 [2024-07-24 17:24:59.073356] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:12.962 [2024-07-24 17:24:59.076006] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:24:12.962 [2024-07-24 17:24:59.076103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:12.962 [2024-07-24 17:24:59.076158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eaf380 with addr=10.0.0.3, port=4420 00:24:12.962 [2024-07-24 17:24:59.076172] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaf380 is same with the state(5) to be set 00:24:12.962 [2024-07-24 17:24:59.076189] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eaf380 (9): Bad file descriptor 00:24:12.962 [2024-07-24 17:24:59.076202] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:24:12.962 [2024-07-24 17:24:59.076211] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:24:12.962 [2024-07-24 17:24:59.076235] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:24:12.962 [2024-07-24 17:24:59.076249] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:12.962 [2024-07-24 17:24:59.083224] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:12.962 [2024-07-24 17:24:59.083332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:12.962 [2024-07-24 17:24:59.083354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eaf6b0 with addr=10.0.0.2, port=4420 00:24:12.962 [2024-07-24 17:24:59.083364] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaf6b0 is same with the state(5) to be set 00:24:12.962 [2024-07-24 17:24:59.083380] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eaf6b0 (9): Bad file descriptor 00:24:12.962 [2024-07-24 17:24:59.083414] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:12.962 [2024-07-24 17:24:59.083424] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:12.962 [2024-07-24 17:24:59.083434] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:12.962 [2024-07-24 17:24:59.083448] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:12.963 [2024-07-24 17:24:59.086074] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:24:12.963 [2024-07-24 17:24:59.086200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:12.963 [2024-07-24 17:24:59.086237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eaf380 with addr=10.0.0.3, port=4420 00:24:12.963 [2024-07-24 17:24:59.086248] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaf380 is same with the state(5) to be set 00:24:12.963 [2024-07-24 17:24:59.086264] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eaf380 (9): Bad file descriptor 00:24:12.963 [2024-07-24 17:24:59.086278] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:24:12.963 [2024-07-24 17:24:59.086286] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:24:12.963 [2024-07-24 17:24:59.086296] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:24:12.963 [2024-07-24 17:24:59.086310] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:12.963 [2024-07-24 17:24:59.093297] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:12.963 [2024-07-24 17:24:59.093400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:12.963 [2024-07-24 17:24:59.093422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eaf6b0 with addr=10.0.0.2, port=4420 00:24:12.963 [2024-07-24 17:24:59.093447] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaf6b0 is same with the state(5) to be set 00:24:12.963 [2024-07-24 17:24:59.093464] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eaf6b0 (9): Bad file descriptor 00:24:12.963 [2024-07-24 17:24:59.093494] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:12.963 [2024-07-24 17:24:59.093519] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:12.963 [2024-07-24 17:24:59.093544] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:12.963 [2024-07-24 17:24:59.093558] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:12.963 [2024-07-24 17:24:59.096173] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:24:12.963 [2024-07-24 17:24:59.096271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:12.963 [2024-07-24 17:24:59.096292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1eaf380 with addr=10.0.0.3, port=4420 00:24:12.963 [2024-07-24 17:24:59.096302] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eaf380 is same with the state(5) to be set 00:24:12.963 [2024-07-24 17:24:59.096318] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eaf380 (9): Bad file descriptor 00:24:12.963 [2024-07-24 17:24:59.096332] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:24:12.963 [2024-07-24 17:24:59.096340] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:24:12.963 [2024-07-24 17:24:59.096349] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:24:12.963 [2024-07-24 17:24:59.096363] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
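The repeated connect() failures (errno = 111, ECONNREFUSED) and "Resetting controller failed." messages above are expected at this point in the test: the 4420 listeners were just removed from both subsystems while the host still holds controller paths to them, so bdev_nvme keeps retrying the stale 10.0.0.2:4420 and 10.0.0.3:4420 connections until the next discovery log page drops those paths in favour of 4421 (the "not found" / "found again" messages that follow). A minimal sketch of the target-side step that triggers this, reusing the rpc_cmd wrapper seen in the trace (assumed here to be the autotest wrapper around SPDK's scripts/rpc.py):

  # Drop the original 4420 listeners (mdns_discovery.sh@160-161 above); host-side
  # paths to 4420 then fail with ECONNREFUSED until discovery prunes them.
  rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0  -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420

  # Afterwards only the 4421 path should remain per controller:
  rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 \
    | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs    # expected: 4421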
00:24:12.963 [2024-07-24 17:24:59.100502] bdev_nvme.c:6798:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 not found 00:24:12.963 [2024-07-24 17:24:59.100529] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:24:12.963 [2024-07-24 17:24:59.100549] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:24:12.963 [2024-07-24 17:24:59.100582] bdev_nvme.c:6798:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:24:12.963 [2024-07-24 17:24:59.100596] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:12.963 [2024-07-24 17:24:59.100609] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:12.963 [2024-07-24 17:24:59.186611] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:24:12.963 [2024-07-24 17:24:59.186662] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:13.900 17:24:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # get_subsystem_names 00:24:13.900 17:24:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:13.900 17:24:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:13.900 17:24:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:24:13.900 17:24:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:13.900 17:24:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:24:13.900 17:24:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:24:13.900 17:24:59 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:13.900 17:25:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:24:13.901 17:25:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # get_bdev_list 00:24:13.901 17:25:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:13.901 17:25:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:24:13.901 17:25:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:13.901 17:25:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:24:13.901 17:25:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:24:13.901 17:25:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:13.901 17:25:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:13.901 17:25:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ 
\m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:24:13.901 17:25:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0 00:24:13.901 17:25:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:24:13.901 17:25:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:13.901 17:25:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:13.901 17:25:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:13.901 17:25:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:24:13.901 17:25:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:24:13.901 17:25:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.160 17:25:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # [[ 4421 == \4\4\2\1 ]] 00:24:14.160 17:25:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # get_subsystem_paths mdns1_nvme0 00:24:14.160 17:25:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:24:14.160 17:25:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:14.160 17:25:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.160 17:25:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:14.160 17:25:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:24:14.160 17:25:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:24:14.160 17:25:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.160 17:25:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # [[ 4421 == \4\4\2\1 ]] 00:24:14.160 17:25:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@168 -- # get_notification_count 00:24:14.160 17:25:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:24:14.160 17:25:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. 
| length' 00:24:14.160 17:25:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.160 17:25:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:14.160 17:25:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.160 17:25:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=0 00:24:14.160 17:25:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=4 00:24:14.160 17:25:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@169 -- # [[ 0 == 0 ]] 00:24:14.160 17:25:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@171 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:24:14.160 17:25:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.160 17:25:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:14.160 17:25:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.160 17:25:00 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@172 -- # sleep 1 00:24:14.160 [2024-07-24 17:25:00.276777] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:24:15.096 17:25:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@174 -- # get_mdns_discovery_svcs 00:24:15.096 17:25:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:24:15.096 17:25:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:24:15.096 17:25:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.096 17:25:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:15.096 17:25:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:24:15.096 17:25:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:24:15.096 17:25:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.096 17:25:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@174 -- # [[ '' == '' ]] 00:24:15.096 17:25:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@175 -- # get_subsystem_names 00:24:15.355 17:25:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:15.355 17:25:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:24:15.355 17:25:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.355 17:25:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:24:15.355 17:25:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:15.355 17:25:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:24:15.355 17:25:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.355 17:25:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@175 -- # [[ '' == '' ]] 00:24:15.355 17:25:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
host/mdns_discovery.sh@176 -- # get_bdev_list 00:24:15.355 17:25:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:15.355 17:25:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:24:15.355 17:25:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.355 17:25:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:24:15.355 17:25:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:15.355 17:25:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:24:15.355 17:25:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.355 17:25:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # [[ '' == '' ]] 00:24:15.355 17:25:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@177 -- # get_notification_count 00:24:15.355 17:25:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:24:15.355 17:25:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.355 17:25:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # jq '. | length' 00:24:15.355 17:25:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:15.355 17:25:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.355 17:25:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notification_count=4 00:24:15.355 17:25:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # notify_id=8 00:24:15.355 17:25:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@178 -- # [[ 4 == 4 ]] 00:24:15.355 17:25:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@181 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:24:15.355 17:25:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.355 17:25:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:15.355 17:25:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:15.355 17:25:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@182 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:24:15.355 17:25:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@650 -- # local es=0 00:24:15.355 17:25:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:24:15.355 17:25:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:15.355 17:25:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:15.355 17:25:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:15.355 17:25:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" 
in 00:24:15.355 17:25:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:24:15.355 17:25:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:15.355 17:25:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:15.355 [2024-07-24 17:25:01.523035] bdev_mdns_client.c: 470:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns 00:24:15.355 2024/07/24 17:25:01 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:24:15.355 request: 00:24:15.355 { 00:24:15.355 "method": "bdev_nvme_start_mdns_discovery", 00:24:15.355 "params": { 00:24:15.355 "name": "mdns", 00:24:15.355 "svcname": "_nvme-disc._http", 00:24:15.355 "hostnqn": "nqn.2021-12.io.spdk:test" 00:24:15.355 } 00:24:15.355 } 00:24:15.355 Got JSON-RPC error response 00:24:15.355 GoRPCClient: error on JSON-RPC call 00:24:15.355 17:25:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:15.355 17:25:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@653 -- # es=1 00:24:15.355 17:25:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:15.355 17:25:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:15.355 17:25:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:15.355 17:25:01 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@183 -- # sleep 5 00:24:15.923 [2024-07-24 17:25:02.111799] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:24:16.182 [2024-07-24 17:25:02.211800] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:24:16.182 [2024-07-24 17:25:02.311804] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:24:16.182 [2024-07-24 17:25:02.311825] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:24:16.182 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:24:16.182 cookie is 0 00:24:16.182 is_local: 1 00:24:16.182 our_own: 0 00:24:16.182 wide_area: 0 00:24:16.182 multicast: 1 00:24:16.182 cached: 1 00:24:16.182 [2024-07-24 17:25:02.411807] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:24:16.182 [2024-07-24 17:25:02.411829] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:24:16.182 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:24:16.182 cookie is 0 00:24:16.182 is_local: 1 00:24:16.182 our_own: 0 00:24:16.182 wide_area: 0 00:24:16.182 multicast: 1 00:24:16.182 cached: 1 00:24:16.182 [2024-07-24 17:25:02.411841] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. 
trid->traddr: 10.0.0.3 trid->trsvcid: 8009 00:24:16.440 [2024-07-24 17:25:02.511812] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:24:16.440 [2024-07-24 17:25:02.511839] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:24:16.440 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:24:16.440 cookie is 0 00:24:16.440 is_local: 1 00:24:16.440 our_own: 0 00:24:16.440 wide_area: 0 00:24:16.440 multicast: 1 00:24:16.440 cached: 1 00:24:16.440 [2024-07-24 17:25:02.611807] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:24:16.440 [2024-07-24 17:25:02.611829] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:24:16.440 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:24:16.440 cookie is 0 00:24:16.440 is_local: 1 00:24:16.440 our_own: 0 00:24:16.440 wide_area: 0 00:24:16.440 multicast: 1 00:24:16.440 cached: 1 00:24:16.440 [2024-07-24 17:25:02.611854] bdev_mdns_client.c: 322:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.2 trid->trsvcid: 8009 00:24:17.398 [2024-07-24 17:25:03.322679] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:24:17.398 [2024-07-24 17:25:03.322719] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:24:17.398 [2024-07-24 17:25:03.322752] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:24:17.398 [2024-07-24 17:25:03.408770] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new subsystem mdns0_nvme0 00:24:17.398 [2024-07-24 17:25:03.469034] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:24:17.398 [2024-07-24 17:25:03.469062] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:24:17.398 [2024-07-24 17:25:03.522350] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:17.398 [2024-07-24 17:25:03.522372] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:17.398 [2024-07-24 17:25:03.522404] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:17.398 [2024-07-24 17:25:03.608836] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem mdns1_nvme0 00:24:17.662 [2024-07-24 17:25:03.669079] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:24:17.662 [2024-07-24 17:25:03.669121] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:20.951 17:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@185 -- # get_mdns_discovery_svcs 00:24:20.951 17:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:24:20.951 17:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r 
'.[].name' 00:24:20.951 17:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:20.951 17:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:20.951 17:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:24:20.951 17:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:24:20.951 17:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:20.951 17:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@185 -- # [[ mdns == \m\d\n\s ]] 00:24:20.951 17:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # get_discovery_ctrlrs 00:24:20.951 17:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:24:20.951 17:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:20.951 17:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:24:20.951 17:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:20.951 17:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:24:20.951 17:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:20.951 17:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:20.951 17:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:24:20.951 17:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # get_bdev_list 00:24:20.951 17:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:24:20.951 17:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:20.951 17:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:20.951 17:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:24:20.951 17:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:20.951 17:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:24:20.951 17:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:20.951 17:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:24:20.951 17:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@190 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:24:20.951 17:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@650 -- # local es=0 00:24:20.951 17:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:24:20.951 17:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@638 -- # 
local arg=rpc_cmd 00:24:20.951 17:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:20.951 17:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:20.951 17:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:20.951 17:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:24:20.951 17:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:20.951 17:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:20.951 [2024-07-24 17:25:06.718629] bdev_mdns_client.c: 475:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp 00:24:20.951 2024/07/24 17:25:06 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:24:20.951 request: 00:24:20.951 { 00:24:20.951 "method": "bdev_nvme_start_mdns_discovery", 00:24:20.951 "params": { 00:24:20.951 "name": "cdc", 00:24:20.951 "svcname": "_nvme-disc._tcp", 00:24:20.951 "hostnqn": "nqn.2021-12.io.spdk:test" 00:24:20.951 } 00:24:20.951 } 00:24:20.951 Got JSON-RPC error response 00:24:20.951 GoRPCClient: error on JSON-RPC call 00:24:20.951 17:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:20.951 17:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@653 -- # es=1 00:24:20.951 17:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:20.951 17:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:20.951 17:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:20.951 17:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@191 -- # get_discovery_ctrlrs 00:24:20.951 17:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:20.951 17:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:20.951 17:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:24:20.951 17:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:20.951 17:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:24:20.951 17:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:24:20.951 17:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:20.951 17:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@191 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:24:20.951 17:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@192 -- # get_bdev_list 00:24:20.951 17:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:20.951 17:25:06 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:20.951 17:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:24:20.951 17:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:24:20.951 17:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:20.951 17:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:24:20.951 17:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:20.951 17:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@192 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:24:20.951 17:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@193 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:24:20.951 17:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:20.951 17:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:20.951 17:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:20.951 17:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@195 -- # rpc_cmd nvmf_stop_mdns_prr 00:24:20.951 17:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:20.951 17:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:20.951 17:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:20.951 17:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@197 -- # trap - SIGINT SIGTERM EXIT 00:24:20.951 17:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@199 -- # kill 93595 00:24:20.951 17:25:06 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@202 -- # wait 93595 00:24:20.951 [2024-07-24 17:25:06.968039] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:24:20.951 17:25:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@203 -- # kill 93606 00:24:20.951 Got SIGTERM, quitting. 00:24:20.951 17:25:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@204 -- # nvmftestfini 00:24:20.951 17:25:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:20.951 17:25:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@117 -- # sync 00:24:20.951 Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:24:20.951 Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:24:20.951 avahi-daemon 0.8 exiting. 
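The two rejected RPCs above (Code=-17, "File exists") come from the guards in bdev_mdns_client.c that the log itself names: once a discovery context called "mdns" is polling for _nvme-disc._tcp, reusing that name with a different service type (_nvme-disc._http) or starting a new context ("cdc") for the same service type is refused. A minimal sketch of that sequence, assuming the test's rpc_cmd helper is equivalent to invoking scripts/rpc.py against the /tmp/host.sock socket shown above:

# first start succeeds and launches the avahi poller for _nvme-disc._tcp
scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test
# same name, different service type: rejected ("mDNS discovery already running with name mdns")
scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test
# new name, same service type: rejected ("mDNS discovery already running for service _nvme-disc._tcp")
scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test
# stopping by name tears the poller down, producing the "Stopping avahi poller" lines above
scripts/rpc.py -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns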
00:24:20.951 17:25:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:20.951 17:25:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@120 -- # set +e 00:24:20.951 17:25:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:20.951 17:25:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:20.951 rmmod nvme_tcp 00:24:20.951 rmmod nvme_fabrics 00:24:20.951 rmmod nvme_keyring 00:24:20.951 17:25:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:20.951 17:25:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@124 -- # set -e 00:24:20.951 17:25:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@125 -- # return 0 00:24:20.951 17:25:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@489 -- # '[' -n 93545 ']' 00:24:20.951 17:25:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@490 -- # killprocess 93545 00:24:20.952 17:25:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@950 -- # '[' -z 93545 ']' 00:24:20.952 17:25:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@954 -- # kill -0 93545 00:24:20.952 17:25:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@955 -- # uname 00:24:21.210 17:25:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:21.210 17:25:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 93545 00:24:21.211 killing process with pid 93545 00:24:21.211 17:25:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:21.211 17:25:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:21.211 17:25:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 93545' 00:24:21.211 17:25:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@969 -- # kill 93545 00:24:21.211 17:25:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@974 -- # wait 93545 00:24:21.211 17:25:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:21.211 17:25:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:21.211 17:25:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:21.211 17:25:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:21.211 17:25:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:21.211 17:25:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:21.211 17:25:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:21.211 17:25:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:21.470 17:25:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:24:21.470 00:24:21.470 real 0m20.068s 00:24:21.470 user 0m39.220s 00:24:21.470 sys 0m1.999s 00:24:21.470 17:25:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:21.470 
************************************ 00:24:21.470 END TEST nvmf_mdns_discovery 00:24:21.470 ************************************ 00:24:21.470 17:25:07 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:21.470 17:25:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:24:21.470 17:25:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:24:21.470 17:25:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:21.470 17:25:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:21.470 17:25:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.470 ************************************ 00:24:21.470 START TEST nvmf_host_multipath 00:24:21.470 ************************************ 00:24:21.470 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:24:21.470 * Looking for test storage... 00:24:21.470 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:21.470 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:21.470 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:24:21.470 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:21.470 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:21.470 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:21.470 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:21.470 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:21.470 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:21.470 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:21.470 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:21.470 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:21.470 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:21.470 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:24:21.470 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:24:21.470 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:21.470 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:21.470 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:21.470 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:21.470 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:21.470 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:21.470 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:21.471 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:21.471 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.471 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.471 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.471 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:24:21.471 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:21.471 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@47 -- # : 0 00:24:21.471 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:21.471 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:21.471 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 
00:24:21.471 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:21.471 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:21.471 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:21.471 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:21.471 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:21.471 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:21.471 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:21.471 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:21.471 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:24:21.471 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:21.471 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:24:21.471 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:24:21.471 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:21.471 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:21.471 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:21.471 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:21.471 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:21.471 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:21.471 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:21.471 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:21.471 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:24:21.471 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:24:21.471 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:24:21.471 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:24:21.471 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:24:21.471 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:24:21.471 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:21.471 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:21.471 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:21.471 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:24:21.471 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:21.471 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:21.471 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:21.471 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:21.471 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:21.471 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:21.471 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:21.471 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:21.471 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:24:21.471 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:24:21.471 Cannot find device "nvmf_tgt_br" 00:24:21.471 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # true 00:24:21.471 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:24:21.471 Cannot find device "nvmf_tgt_br2" 00:24:21.471 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # true 00:24:21.471 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:24:21.471 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:24:21.471 Cannot find device "nvmf_tgt_br" 00:24:21.471 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # true 00:24:21.471 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:24:21.471 Cannot find device "nvmf_tgt_br2" 00:24:21.471 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # true 00:24:21.471 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:24:21.730 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:24:21.730 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:21.730 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:21.730 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:24:21.730 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:21.730 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:21.730 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:24:21.730 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:24:21.730 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:21.730 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:21.730 17:25:07 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:21.730 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:21.730 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:21.730 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:21.730 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:21.730 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:21.730 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:24:21.730 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:24:21.730 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:24:21.730 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:24:21.730 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:21.730 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:21.730 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:21.730 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:24:21.730 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:24:21.730 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:24:21.730 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:21.730 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:22.010 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:22.010 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:22.010 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:24:22.010 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:22.010 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:24:22.010 00:24:22.010 --- 10.0.0.2 ping statistics --- 00:24:22.010 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:22.010 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:24:22.011 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:24:22.011 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:24:22.011 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:24:22.011 00:24:22.011 --- 10.0.0.3 ping statistics --- 00:24:22.011 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:22.011 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:24:22.011 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:22.011 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:22.011 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:24:22.011 00:24:22.011 --- 10.0.0.1 ping statistics --- 00:24:22.011 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:22.011 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:24:22.011 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:22.011 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@433 -- # return 0 00:24:22.011 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:22.011 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:22.011 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:22.011 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:22.011 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:22.011 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:22.011 17:25:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:22.011 17:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:24:22.011 17:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:22.011 17:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:22.011 17:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:24:22.011 17:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@481 -- # nvmfpid=94174 00:24:22.011 17:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:24:22.011 17:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@482 -- # waitforlisten 94174 00:24:22.011 17:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@831 -- # '[' -z 94174 ']' 00:24:22.011 17:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:22.011 17:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:22.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:22.011 17:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
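Before the target application starts, the nvmf_veth_init steps above build the virtual topology the whole multipath test runs over: the SPDK target lives in the nvmf_tgt_ns_spdk namespace and is reachable at 10.0.0.2 and 10.0.0.3, the host side keeps 10.0.0.1, and the three host-side veth peers sit on one bridge. A condensed restatement of those commands, with the link-up steps omitted and the names and addresses exactly as they appear in the log:

ip netns add nvmf_tgt_ns_spdk
# one veth pair for the initiator, two for the target
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
# addressing: initiator 10.0.0.1, target 10.0.0.2 and 10.0.0.3
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
# bridge the host-side peers and allow NVMe/TCP traffic to port 4420
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# the three pings above (10.0.0.2 and 10.0.0.3 from the host, 10.0.0.1 from inside the namespace) verify the wiring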
00:24:22.011 17:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:22.011 17:25:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:24:22.011 [2024-07-24 17:25:08.071496] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:24:22.011 [2024-07-24 17:25:08.071599] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:22.011 [2024-07-24 17:25:08.222503] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:22.269 [2024-07-24 17:25:08.332905] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:22.269 [2024-07-24 17:25:08.332976] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:22.269 [2024-07-24 17:25:08.333003] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:22.269 [2024-07-24 17:25:08.333011] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:22.269 [2024-07-24 17:25:08.333018] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:22.269 [2024-07-24 17:25:08.333117] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:22.269 [2024-07-24 17:25:08.333131] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:23.204 17:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:23.204 17:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # return 0 00:24:23.204 17:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:23.204 17:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:23.204 17:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:24:23.204 17:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:23.204 17:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=94174 00:24:23.204 17:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:23.204 [2024-07-24 17:25:09.373888] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:23.204 17:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:23.463 Malloc0 00:24:23.721 17:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:24:23.721 17:25:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:24.288 17:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:24:24.288 [2024-07-24 17:25:10.461859] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:24.288 17:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:24.546 [2024-07-24 17:25:10.721979] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:24.546 17:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=94273 00:24:24.546 17:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:24:24.546 17:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:24.546 17:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 94273 /var/tmp/bdevperf.sock 00:24:24.546 17:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@831 -- # '[' -z 94273 ']' 00:24:24.546 17:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:24.546 17:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:24.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:24.546 17:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:24.546 17:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:24.546 17:25:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:24:25.481 17:25:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:25.481 17:25:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # return 0 00:24:25.481 17:25:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:25.738 17:25:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:24:26.305 Nvme0n1 00:24:26.305 17:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:24:26.563 Nvme0n1 00:24:26.563 17:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:24:26.563 17:25:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:24:27.536 17:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:24:27.536 17:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:27.795 17:25:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:28.060 17:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:24:28.060 17:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=94366 00:24:28.060 17:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94174 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:28.060 17:25:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:24:34.630 17:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:34.630 17:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:24:34.630 17:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:24:34.630 17:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:34.630 Attaching 4 probes... 00:24:34.630 @path[10.0.0.2, 4421]: 16698 00:24:34.630 @path[10.0.0.2, 4421]: 18564 00:24:34.630 @path[10.0.0.2, 4421]: 18524 00:24:34.630 @path[10.0.0.2, 4421]: 18355 00:24:34.630 @path[10.0.0.2, 4421]: 18771 00:24:34.630 17:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:34.630 17:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:34.630 17:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:24:34.630 17:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:24:34.630 17:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:24:34.630 17:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:24:34.630 17:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 94366 00:24:34.631 17:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:34.631 17:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:24:34.631 17:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:34.631 17:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:34.889 17:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:24:34.889 17:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=94495 00:24:34.889 17:25:20 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:24:34.889 17:25:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94174 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:41.449 17:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:41.449 17:25:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:24:41.449 17:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:24:41.449 17:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:41.449 Attaching 4 probes... 00:24:41.449 @path[10.0.0.2, 4420]: 18066 00:24:41.449 @path[10.0.0.2, 4420]: 18062 00:24:41.449 @path[10.0.0.2, 4420]: 18234 00:24:41.449 @path[10.0.0.2, 4420]: 18381 00:24:41.449 @path[10.0.0.2, 4420]: 18288 00:24:41.449 17:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:41.449 17:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:41.449 17:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:24:41.449 17:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:24:41.449 17:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:24:41.449 17:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:24:41.449 17:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 94495 00:24:41.449 17:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:41.449 17:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:24:41.449 17:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:41.449 17:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:41.707 17:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:24:41.707 17:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94174 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:41.707 17:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=94628 00:24:41.707 17:25:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:24:48.273 17:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:48.273 17:25:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 
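Each confirm_io_on_port round above and below follows the same pattern: bpftrace.sh attaches the nvmf_path.bt probes to the target (pid 94174) and counts I/O per @path[address, port] while bdevperf keeps submitting, the listener whose first ANA state matches the expected one is read back over RPC, and the two ports must agree. A condensed reconstruction of that check from the commands visible in the log; the backgrounding of bpftrace.sh and its redirection into trace.txt are assumptions (the log only shows the resulting dtrace_pid and the later cat/kill/rm steps), and the repository paths are abbreviated:

confirm_io_on_port() {
    local expected_state=$1 expected_port=$2
    # count I/O per path while bdevperf runs
    # (assumption: bpftrace.sh is backgrounded with its output captured in trace.txt)
    scripts/bpftrace.sh "$nvmfapp_pid" scripts/bpf/nvmf_path.bt &> trace.txt &
    local dtrace_pid=$!
    sleep 6
    # port the target advertises for the expected ANA state
    local active_port
    active_port=$(scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 |
        jq -r ".[] | select (.ana_states[0].ana_state==\"$expected_state\") | .address.trsvcid")
    # port the host actually sent I/O to: first "@path[10.0.0.2, <port>]: <count>" line in the trace
    local port
    port=$(cut -d ']' -f1 trace.txt | awk '$1=="@path[10.0.0.2," {print $2}' | sed -n 1p)
    kill "$dtrace_pid"
    rm -f trace.txt
    [[ $port == "$expected_port" && $active_port == "$expected_port" ]]
}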
00:24:48.273 17:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:24:48.273 17:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:48.273 Attaching 4 probes... 00:24:48.273 @path[10.0.0.2, 4421]: 15139 00:24:48.273 @path[10.0.0.2, 4421]: 18447 00:24:48.273 @path[10.0.0.2, 4421]: 18254 00:24:48.273 @path[10.0.0.2, 4421]: 18232 00:24:48.273 @path[10.0.0.2, 4421]: 18348 00:24:48.273 17:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:48.273 17:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:48.273 17:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:24:48.273 17:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:24:48.273 17:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:24:48.273 17:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:24:48.273 17:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 94628 00:24:48.273 17:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:48.273 17:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:24:48.273 17:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:48.273 17:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:48.531 17:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:24:48.531 17:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=94763 00:24:48.531 17:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:24:48.531 17:25:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94174 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:55.092 17:25:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:24:55.092 17:25:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:24:55.092 17:25:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:24:55.092 17:25:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:55.092 Attaching 4 probes... 
00:24:55.092 00:24:55.092 00:24:55.092 00:24:55.092 00:24:55.092 00:24:55.092 17:25:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:24:55.092 17:25:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:24:55.092 17:25:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:24:55.092 17:25:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:24:55.092 17:25:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:24:55.092 17:25:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:24:55.093 17:25:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 94763 00:24:55.093 17:25:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:24:55.093 17:25:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:24:55.093 17:25:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:55.093 17:25:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:55.359 17:25:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:24:55.359 17:25:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94174 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:24:55.359 17:25:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=94889 00:24:55.359 17:25:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:25:01.925 17:25:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:25:01.925 17:25:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:25:01.925 17:25:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:25:01.925 17:25:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:01.925 Attaching 4 probes... 
00:25:01.925 @path[10.0.0.2, 4421]: 17573 00:25:01.925 @path[10.0.0.2, 4421]: 17732 00:25:01.925 @path[10.0.0.2, 4421]: 18176 00:25:01.925 @path[10.0.0.2, 4421]: 18507 00:25:01.925 @path[10.0.0.2, 4421]: 17782 00:25:01.925 17:25:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:25:01.925 17:25:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:25:01.925 17:25:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:25:01.925 17:25:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:25:01.925 17:25:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:25:01.925 17:25:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:25:01.925 17:25:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 94889 00:25:01.925 17:25:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:01.925 17:25:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:01.925 [2024-07-24 17:25:47.940474] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2264330 is same with the state(5) to be set 00:25:01.925 [2024-07-24 17:25:47.940567] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2264330 is same with the state(5) to be set 00:25:01.925 [2024-07-24 17:25:47.940578] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2264330 is same with the state(5) to be set 00:25:01.925 [2024-07-24 17:25:47.940586] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2264330 is same with the state(5) to be set 00:25:01.925 [2024-07-24 17:25:47.940594] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2264330 is same with the state(5) to be set 00:25:01.925 [2024-07-24 17:25:47.940602] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2264330 is same with the state(5) to be set 00:25:01.925 [2024-07-24 17:25:47.940611] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2264330 is same with the state(5) to be set 00:25:01.925 [2024-07-24 17:25:47.940618] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2264330 is same with the state(5) to be set 00:25:01.925 [2024-07-24 17:25:47.940626] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2264330 is same with the state(5) to be set 00:25:01.925 [2024-07-24 17:25:47.940634] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2264330 is same with the state(5) to be set 00:25:01.925 [2024-07-24 17:25:47.940642] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2264330 is same with the state(5) to be set 00:25:01.925 [2024-07-24 17:25:47.940650] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2264330 is same with the state(5) to be set 00:25:01.925 [2024-07-24 17:25:47.940657] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2264330 is same with the state(5) to be set 00:25:01.925 [2024-07-24 17:25:47.940665] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2264330 is same with the state(5) to be set 00:25:01.927 [2024-07-24 17:25:47.941623] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2264330 is same with the state(5) to be set 00:25:01.927 [2024-07-24 17:25:47.941630] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2264330 is same with the state(5) to be set 00:25:01.927 [2024-07-24 17:25:47.941638] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2264330 is same with the state(5) to be set 00:25:01.927 [2024-07-24 17:25:47.941646] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2264330 is same with the state(5) to be set 00:25:01.927 17:25:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:25:02.863 17:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:25:02.863 17:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95025 00:25:02.863 17:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94174 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:25:02.863 17:25:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:25:09.427 17:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:25:09.427 17:25:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:25:09.427 17:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:25:09.427 17:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:09.427 Attaching 4 probes... 
00:25:09.427 @path[10.0.0.2, 4420]: 17977 00:25:09.427 @path[10.0.0.2, 4420]: 18029 00:25:09.427 @path[10.0.0.2, 4420]: 18182 00:25:09.427 @path[10.0.0.2, 4420]: 18003 00:25:09.427 @path[10.0.0.2, 4420]: 17746 00:25:09.427 17:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:25:09.427 17:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:25:09.427 17:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:25:09.427 17:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:25:09.427 17:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:25:09.427 17:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:25:09.427 17:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95025 00:25:09.427 17:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:09.427 17:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:09.427 [2024-07-24 17:25:55.449282] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:09.427 17:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:09.686 17:25:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:25:16.245 17:26:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:25:16.245 17:26:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95217 00:25:16.245 17:26:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94174 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:25:16.245 17:26:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:25:22.828 17:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:25:22.828 17:26:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:25:22.828 17:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:25:22.828 17:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:22.828 Attaching 4 probes... 
00:25:22.828 @path[10.0.0.2, 4421]: 17889 00:25:22.828 @path[10.0.0.2, 4421]: 18078 00:25:22.828 @path[10.0.0.2, 4421]: 18017 00:25:22.828 @path[10.0.0.2, 4421]: 18740 00:25:22.828 @path[10.0.0.2, 4421]: 18177 00:25:22.828 17:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:25:22.828 17:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:25:22.828 17:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:25:22.828 17:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:25:22.828 17:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:25:22.828 17:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:25:22.828 17:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95217 00:25:22.828 17:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:22.828 17:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 94273 00:25:22.828 17:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@950 -- # '[' -z 94273 ']' 00:25:22.828 17:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # kill -0 94273 00:25:22.828 17:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # uname 00:25:22.828 17:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:22.828 17:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94273 00:25:22.828 killing process with pid 94273 00:25:22.828 17:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:25:22.828 17:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:25:22.828 17:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94273' 00:25:22.828 17:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@969 -- # kill 94273 00:25:22.828 17:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@974 -- # wait 94273 00:25:22.828 Connection closed with partial response: 00:25:22.828 00:25:22.828 00:25:22.828 17:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 94273 00:25:22.828 17:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:25:22.828 [2024-07-24 17:25:10.799045] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:25:22.828 [2024-07-24 17:25:10.799196] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94273 ] 00:25:22.828 [2024-07-24 17:25:10.942410] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:22.828 [2024-07-24 17:25:11.063877] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:22.828 Running I/O for 90 seconds... 
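The last two confirm cycles above differ from the earlier ones in that they exercise listener removal and re-addition rather than ANA state flips alone: the optimized 4421 listener is dropped, I/O is expected to fail over to the non_optimized 4420 path, and once 4421 is re-added and marked optimized the I/O is expected to move back before bdevperf (pid 94273 in this run, per the EAL parameters line) is torn down. Reconstructed from the xtrace (multipath.sh @100-@118), reusing $rpc_py, $NQN and $testdir from the sketch earlier; the rpc.py invocations are copied from the trace and any script lines the trace does not show are simply omitted.
    # Fail over by removing the active path (@100-@104)
    $rpc_py nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4421                        # @100
    sleep 1                                                                                       # @101
    confirm_io_on_port non_optimized 4420                                                         # @104

    # Fail back by re-adding 4421 as an optimized path (@107-@112)
    $rpc_py nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4421                           # @107
    $rpc_py nvmf_subsystem_listener_set_ana_state $NQN -t tcp -a 10.0.0.2 -s 4421 -n optimized    # @108
    sleep 6                                                                                       # @111
    confirm_io_on_port optimized 4421                                                             # @112

    # Tear down bdevperf and dump its output (@114-@118)
    killprocess 94273                                                                             # @114
    wait 94273                                                                                    # @116
    cat "$testdir/try.txt"                                                                        # @118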
00:25:22.828 [2024-07-24 17:25:20.956049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:79760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.828 [2024-07-24 17:25:20.956142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:22.828 [2024-07-24 17:25:20.956181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:79768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.828 [2024-07-24 17:25:20.956199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:22.828 [2024-07-24 17:25:20.956222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:79776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.828 [2024-07-24 17:25:20.956238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:22.828 [2024-07-24 17:25:20.956259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:79784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.828 [2024-07-24 17:25:20.956274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:22.828 [2024-07-24 17:25:20.956295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:79792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.828 [2024-07-24 17:25:20.956309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:22.828 [2024-07-24 17:25:20.956330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:79800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.828 [2024-07-24 17:25:20.956345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:22.828 [2024-07-24 17:25:20.956366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:79808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.828 [2024-07-24 17:25:20.956380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:22.828 [2024-07-24 17:25:20.956401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:79816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.829 [2024-07-24 17:25:20.956416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:22.829 [2024-07-24 17:25:20.956437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:79824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.829 [2024-07-24 17:25:20.956451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:22.829 [2024-07-24 17:25:20.956486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:79832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.829 [2024-07-24 17:25:20.956515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:87 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:22.829 [2024-07-24 17:25:20.956535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:79840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.829 [2024-07-24 17:25:20.956579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:22.829 [2024-07-24 17:25:20.956603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:79848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.829 [2024-07-24 17:25:20.956617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:22.829 [2024-07-24 17:25:20.956637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:79856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.829 [2024-07-24 17:25:20.956650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:22.829 [2024-07-24 17:25:20.956669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:79864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.829 [2024-07-24 17:25:20.956683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:22.829 [2024-07-24 17:25:20.956702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:79872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.829 [2024-07-24 17:25:20.956716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:22.829 [2024-07-24 17:25:20.956736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:79880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.829 [2024-07-24 17:25:20.956749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:22.829 [2024-07-24 17:25:20.956771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:79888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.829 [2024-07-24 17:25:20.956784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:22.829 [2024-07-24 17:25:20.956805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:79896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.829 [2024-07-24 17:25:20.956818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:22.829 [2024-07-24 17:25:20.956838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:79904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.829 [2024-07-24 17:25:20.956852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:22.829 [2024-07-24 17:25:20.956871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:79912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.829 [2024-07-24 17:25:20.956885] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:22.829 [2024-07-24 17:25:20.956905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:79920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.829 [2024-07-24 17:25:20.956935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:22.829 [2024-07-24 17:25:20.956955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:79928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.829 [2024-07-24 17:25:20.956969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:22.829 [2024-07-24 17:25:20.956989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:79936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.829 [2024-07-24 17:25:20.957004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:22.829 [2024-07-24 17:25:20.957033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:79944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.829 [2024-07-24 17:25:20.957048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:22.829 [2024-07-24 17:25:20.957068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:79952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.829 [2024-07-24 17:25:20.957082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:22.829 [2024-07-24 17:25:20.957102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:79960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.829 [2024-07-24 17:25:20.957134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:22.829 [2024-07-24 17:25:20.957154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.829 [2024-07-24 17:25:20.957183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:22.829 [2024-07-24 17:25:20.957205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:79976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.829 [2024-07-24 17:25:20.957220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:22.829 [2024-07-24 17:25:20.957241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:79984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.829 [2024-07-24 17:25:20.957255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:22.829 [2024-07-24 17:25:20.957276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:79992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.829 [2024-07-24 17:25:20.957290] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:22.829 [2024-07-24 17:25:20.957311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:80000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.829 [2024-07-24 17:25:20.957326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:22.829 [2024-07-24 17:25:20.957348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:80008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.829 [2024-07-24 17:25:20.957362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:22.829 [2024-07-24 17:25:20.957383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:80016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.829 [2024-07-24 17:25:20.957398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:22.829 [2024-07-24 17:25:20.957430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:80024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.829 [2024-07-24 17:25:20.957445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:22.829 [2024-07-24 17:25:20.957877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:80032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.829 [2024-07-24 17:25:20.957901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:22.829 [2024-07-24 17:25:20.957971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:80040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.829 [2024-07-24 17:25:20.957989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:22.829 [2024-07-24 17:25:20.958010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:80048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.829 [2024-07-24 17:25:20.958025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:22.829 [2024-07-24 17:25:20.958046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:80056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.829 [2024-07-24 17:25:20.958060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:22.829 [2024-07-24 17:25:20.958081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:80064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.829 [2024-07-24 17:25:20.958095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:22.829 [2024-07-24 17:25:20.958116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:80072 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:25:22.829 [2024-07-24 17:25:20.958130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:22.829 [2024-07-24 17:25:20.958151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:80080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.830 [2024-07-24 17:25:20.958185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:22.830 [2024-07-24 17:25:20.958215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:80088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.830 [2024-07-24 17:25:20.958230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:22.830 [2024-07-24 17:25:20.958251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:80096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.830 [2024-07-24 17:25:20.958265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:22.830 [2024-07-24 17:25:20.958286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:80104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.830 [2024-07-24 17:25:20.958300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:22.830 [2024-07-24 17:25:20.958321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:80112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.830 [2024-07-24 17:25:20.958335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:22.830 [2024-07-24 17:25:20.958357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:80120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.830 [2024-07-24 17:25:20.958371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:22.830 [2024-07-24 17:25:20.958392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:80128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.830 [2024-07-24 17:25:20.958408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:22.830 [2024-07-24 17:25:20.958429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:80136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.830 [2024-07-24 17:25:20.958452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:22.830 [2024-07-24 17:25:20.958474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.830 [2024-07-24 17:25:20.958488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:22.830 [2024-07-24 17:25:20.958544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:2 nsid:1 lba:80152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.830 [2024-07-24 17:25:20.958559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:22.830 [2024-07-24 17:25:20.958579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:80160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.830 [2024-07-24 17:25:20.958593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:22.830 [2024-07-24 17:25:20.958613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:80168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.830 [2024-07-24 17:25:20.958626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:22.830 [2024-07-24 17:25:20.958646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:80176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.830 [2024-07-24 17:25:20.958660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:22.830 [2024-07-24 17:25:20.958679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:80184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.830 [2024-07-24 17:25:20.958692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:22.830 [2024-07-24 17:25:20.958712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:80192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.830 [2024-07-24 17:25:20.958725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:22.830 [2024-07-24 17:25:20.958745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:80200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.830 [2024-07-24 17:25:20.958758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:22.830 [2024-07-24 17:25:20.958778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:80208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.830 [2024-07-24 17:25:20.958791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:22.830 [2024-07-24 17:25:20.958811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:80216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.830 [2024-07-24 17:25:20.958824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:22.830 [2024-07-24 17:25:20.958844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:80224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.830 [2024-07-24 17:25:20.958857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:22.830 [2024-07-24 17:25:20.958876] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:80232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.830 [2024-07-24 17:25:20.958896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:22.830 [2024-07-24 17:25:20.958934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:80240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.830 [2024-07-24 17:25:20.958948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:22.830 [2024-07-24 17:25:20.958969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:80248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.830 [2024-07-24 17:25:20.958982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:22.830 [2024-07-24 17:25:20.959003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:80256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.830 [2024-07-24 17:25:20.959017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:22.830 [2024-07-24 17:25:20.959038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:80264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.830 [2024-07-24 17:25:20.959051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:22.830 [2024-07-24 17:25:20.959071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:80272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.830 [2024-07-24 17:25:20.959142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:22.830 [2024-07-24 17:25:20.961163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:80280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.830 [2024-07-24 17:25:20.961188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:22.830 [2024-07-24 17:25:20.961211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:80288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.830 [2024-07-24 17:25:20.961226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.830 [2024-07-24 17:25:20.961247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:80296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.830 [2024-07-24 17:25:20.961262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.830 [2024-07-24 17:25:20.961283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:80304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.830 [2024-07-24 17:25:20.961298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 
00:25:22.830 [2024-07-24 17:25:20.961319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:80312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.830 [2024-07-24 17:25:20.961334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:22.830 [2024-07-24 17:25:20.962293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:80320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.830 [2024-07-24 17:25:20.962321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:22.830 [2024-07-24 17:25:20.962348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:79760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.830 [2024-07-24 17:25:20.962364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:22.830 [2024-07-24 17:25:20.962398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:79768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.830 [2024-07-24 17:25:20.962413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:22.831 [2024-07-24 17:25:20.962435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:79776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.831 [2024-07-24 17:25:20.962449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:22.831 [2024-07-24 17:25:20.962470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:79784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.831 [2024-07-24 17:25:20.962484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:22.831 [2024-07-24 17:25:20.962505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:79792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.831 [2024-07-24 17:25:20.962520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:22.831 [2024-07-24 17:25:20.962541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:79800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.831 [2024-07-24 17:25:20.962555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:22.831 [2024-07-24 17:25:20.962576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.831 [2024-07-24 17:25:20.962590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:22.831 [2024-07-24 17:25:20.962611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:79816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.831 [2024-07-24 17:25:20.962625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:113 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:22.831 [2024-07-24 17:25:20.962647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:79824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.831 [2024-07-24 17:25:20.962661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:22.831 [2024-07-24 17:25:20.962682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:79832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.831 [2024-07-24 17:25:20.962696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:22.831 [2024-07-24 17:25:20.962723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:79840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.831 [2024-07-24 17:25:20.962738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:22.831 [2024-07-24 17:25:20.962759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:79848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.831 [2024-07-24 17:25:20.962773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:22.831 [2024-07-24 17:25:20.962794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:79856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.831 [2024-07-24 17:25:20.962809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:22.831 [2024-07-24 17:25:20.962840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:79864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.831 [2024-07-24 17:25:20.962855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:22.831 [2024-07-24 17:25:20.962876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:79872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.831 [2024-07-24 17:25:20.962890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:22.831 [2024-07-24 17:25:20.962911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:79880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.831 [2024-07-24 17:25:20.962925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:22.831 [2024-07-24 17:25:20.962946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:79888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.831 [2024-07-24 17:25:20.962960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:22.831 [2024-07-24 17:25:20.962981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:79896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.831 [2024-07-24 17:25:20.962995] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:22.831 [2024-07-24 17:25:20.963016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:79904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.831 [2024-07-24 17:25:20.963030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:22.831 [2024-07-24 17:25:20.963051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:79912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.831 [2024-07-24 17:25:20.963065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:22.831 [2024-07-24 17:25:20.963111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:79920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.831 [2024-07-24 17:25:20.963131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:22.831 [2024-07-24 17:25:20.963152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:79928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.831 [2024-07-24 17:25:20.963167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:22.831 [2024-07-24 17:25:20.963188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:79936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.831 [2024-07-24 17:25:20.963202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:22.831 [2024-07-24 17:25:20.963223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:79944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.831 [2024-07-24 17:25:20.963238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:22.831 [2024-07-24 17:25:20.963258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:79952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.831 [2024-07-24 17:25:20.963273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:22.831 [2024-07-24 17:25:20.963294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:79960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.831 [2024-07-24 17:25:20.963316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:22.831 [2024-07-24 17:25:20.963343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:79968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.831 [2024-07-24 17:25:20.963358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:22.831 [2024-07-24 17:25:20.963379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:79976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:22.831 [2024-07-24 17:25:20.963393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:22.831 [2024-07-24 17:25:20.963414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:79984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.831 [2024-07-24 17:25:20.963428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:22.831 [2024-07-24 17:25:20.963455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:79992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.831 [2024-07-24 17:25:20.963468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:22.831 [2024-07-24 17:25:20.963489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:80000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.831 [2024-07-24 17:25:20.963503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:22.831 [2024-07-24 17:25:20.963524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:80008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.831 [2024-07-24 17:25:20.963538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:22.831 [2024-07-24 17:25:20.963560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:80016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.831 [2024-07-24 17:25:20.963574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:22.831 [2024-07-24 17:25:20.963981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:80024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.831 [2024-07-24 17:25:20.964005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:22.831 [2024-07-24 17:25:20.964030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:80328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.831 [2024-07-24 17:25:20.964046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:22.832 [2024-07-24 17:25:20.964066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:80336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.832 [2024-07-24 17:25:20.964081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:22.832 [2024-07-24 17:25:20.964115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:80344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.832 [2024-07-24 17:25:20.964133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:22.832 [2024-07-24 17:25:20.964155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 
lba:80352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.832 [2024-07-24 17:25:20.964180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:22.832 [2024-07-24 17:25:20.964203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:80360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.832 [2024-07-24 17:25:20.964225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:22.832 [2024-07-24 17:25:20.964247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:80368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.832 [2024-07-24 17:25:20.964262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:22.832 [2024-07-24 17:25:20.964282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:80376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.832 [2024-07-24 17:25:20.964297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:22.832 [2024-07-24 17:25:20.964318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:80384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.832 [2024-07-24 17:25:20.964332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:22.832 [2024-07-24 17:25:20.964358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:80392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.832 [2024-07-24 17:25:20.964373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:22.832 [2024-07-24 17:25:20.964393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:80400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.832 [2024-07-24 17:25:20.964408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:22.832 [2024-07-24 17:25:20.964429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:80408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.832 [2024-07-24 17:25:20.964443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:22.832 [2024-07-24 17:25:20.964464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:80416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.832 [2024-07-24 17:25:20.964478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:22.832 [2024-07-24 17:25:20.964499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:80424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.832 [2024-07-24 17:25:20.964513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:22.832 [2024-07-24 17:25:20.964534] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:80432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.832 [2024-07-24 17:25:20.964549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:22.832 [2024-07-24 17:25:20.964569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:80440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.832 [2024-07-24 17:25:20.964583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:22.832 [2024-07-24 17:25:20.964604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:80448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.832 [2024-07-24 17:25:20.964618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:22.832 [2024-07-24 17:25:20.964646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:80456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.832 [2024-07-24 17:25:20.964661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:22.832 [2024-07-24 17:25:20.964682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:80464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.832 [2024-07-24 17:25:20.964697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:22.832 [2024-07-24 17:25:20.964718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:80472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.832 [2024-07-24 17:25:20.964733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:22.832 [2024-07-24 17:25:20.964753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:80480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.832 [2024-07-24 17:25:20.964767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:22.832 [2024-07-24 17:25:20.964789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:80488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.832 [2024-07-24 17:25:20.964808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:22.832 [2024-07-24 17:25:20.964830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:80496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.832 [2024-07-24 17:25:20.964844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:22.832 [2024-07-24 17:25:20.964865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:80504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.832 [2024-07-24 17:25:20.964880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003d p:0 m:0 dnr:0 
00:25:22.832 [2024-07-24 17:25:20.964901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:80512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.832 [2024-07-24 17:25:20.964915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:22.832 [2024-07-24 17:25:20.964936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:80520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.832 [2024-07-24 17:25:20.964951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:22.832 [2024-07-24 17:25:20.964972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:80528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.832 [2024-07-24 17:25:20.964986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:22.832 [2024-07-24 17:25:20.965007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:80536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.832 [2024-07-24 17:25:20.965021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:22.832 [2024-07-24 17:25:20.965041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:80544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.832 [2024-07-24 17:25:20.965056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:22.832 [2024-07-24 17:25:20.965083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:80552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.832 [2024-07-24 17:25:20.965110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:22.832 [2024-07-24 17:25:20.965134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:80560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.832 [2024-07-24 17:25:20.965149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:22.832 [2024-07-24 17:25:20.965169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:80568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.832 [2024-07-24 17:25:20.965184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:22.832 [2024-07-24 17:25:20.965205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:80576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.833 [2024-07-24 17:25:20.965219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:22.833 [2024-07-24 17:25:20.965240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:80584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.833 [2024-07-24 17:25:20.965254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:78 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:22.833 [2024-07-24 17:25:20.965275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:80592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.833 [2024-07-24 17:25:20.965289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:22.833 [2024-07-24 17:25:20.965310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:80600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.833 [2024-07-24 17:25:20.965324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:22.833 [2024-07-24 17:25:20.965344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:80608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.833 [2024-07-24 17:25:20.965359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:22.833 [2024-07-24 17:25:20.965380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:80616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.833 [2024-07-24 17:25:20.965399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:22.833 [2024-07-24 17:25:20.965421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:80624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.833 [2024-07-24 17:25:20.965435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:22.833 [2024-07-24 17:25:20.965456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:80632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.833 [2024-07-24 17:25:20.965470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:22.833 [2024-07-24 17:25:20.965517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:80640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.833 [2024-07-24 17:25:20.965531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:22.833 [2024-07-24 17:25:20.965552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:80648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.833 [2024-07-24 17:25:20.965573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:22.833 [2024-07-24 17:25:20.965596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:80656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.833 [2024-07-24 17:25:20.965610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:22.833 [2024-07-24 17:25:20.965631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:80664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.833 [2024-07-24 17:25:20.965646] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:22.833 [2024-07-24 17:25:20.965666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:80672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.833 [2024-07-24 17:25:20.965681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:22.833 [2024-07-24 17:25:20.965702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:80680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.833 [2024-07-24 17:25:20.965716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:22.833 [2024-07-24 17:25:20.965736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:80688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.833 [2024-07-24 17:25:20.965750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:22.833 [2024-07-24 17:25:20.965772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:80696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.833 [2024-07-24 17:25:20.965786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:22.833 [2024-07-24 17:25:20.965807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:80704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.833 [2024-07-24 17:25:20.965821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:22.833 [2024-07-24 17:25:20.965856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:80712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.833 [2024-07-24 17:25:20.965870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:22.833 [2024-07-24 17:25:20.965890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:80720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.833 [2024-07-24 17:25:20.965905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:22.833 [2024-07-24 17:25:20.965940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:80728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.833 [2024-07-24 17:25:20.965954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:22.833 [2024-07-24 17:25:20.965974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:80736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.833 [2024-07-24 17:25:20.965989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:22.833 [2024-07-24 17:25:20.966010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:80744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:22.833 [2024-07-24 17:25:20.966035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:22.833 [2024-07-24 17:25:20.966058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:80752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.833 [2024-07-24 17:25:20.966072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:22.833 [2024-07-24 17:25:20.966093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:80760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.833 [2024-07-24 17:25:20.966107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:22.833 [2024-07-24 17:25:20.966128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:80768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.833 [2024-07-24 17:25:20.966142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:22.833 [2024-07-24 17:25:20.966174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:80776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.833 [2024-07-24 17:25:20.966191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:22.833 [2024-07-24 17:25:20.966212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:80032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.834 [2024-07-24 17:25:20.966226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:22.834 [2024-07-24 17:25:20.966247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:80040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.834 [2024-07-24 17:25:20.966261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:22.834 [2024-07-24 17:25:20.966282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:80048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.834 [2024-07-24 17:25:20.966296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:22.834 [2024-07-24 17:25:20.966317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:80056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.834 [2024-07-24 17:25:20.966332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:22.834 [2024-07-24 17:25:20.966353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:80064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.834 [2024-07-24 17:25:20.966367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:22.834 [2024-07-24 17:25:20.966388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 
lba:80072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.834 [2024-07-24 17:25:20.966403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:22.834 [2024-07-24 17:25:20.967233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:80080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.834 [2024-07-24 17:25:20.967262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:22.834 [2024-07-24 17:25:20.967289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:80088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.834 [2024-07-24 17:25:20.967305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:22.834 [2024-07-24 17:25:20.967341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:80096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.834 [2024-07-24 17:25:20.967363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:22.834 [2024-07-24 17:25:20.967384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:80104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.834 [2024-07-24 17:25:20.967399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:22.834 [2024-07-24 17:25:20.967420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:80112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.834 [2024-07-24 17:25:20.967434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:22.834 [2024-07-24 17:25:20.967455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:80120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.834 [2024-07-24 17:25:20.967470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:22.834 [2024-07-24 17:25:20.967491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:80128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.834 [2024-07-24 17:25:20.967520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:22.834 [2024-07-24 17:25:20.967540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:80136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.834 [2024-07-24 17:25:20.967553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:22.834 [2024-07-24 17:25:20.967573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:80144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.834 [2024-07-24 17:25:20.967587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:22.834 [2024-07-24 17:25:20.967607] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:80152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.834 [2024-07-24 17:25:20.967621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:22.834 [2024-07-24 17:25:20.967641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:80160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.834 [2024-07-24 17:25:20.967654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:22.834 [2024-07-24 17:25:20.967674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:80168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.834 [2024-07-24 17:25:20.967705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:22.834 [2024-07-24 17:25:20.967726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:80176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.834 [2024-07-24 17:25:20.967740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:22.834 [2024-07-24 17:25:20.967760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:80184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.834 [2024-07-24 17:25:20.967774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:22.834 [2024-07-24 17:25:20.967802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.834 [2024-07-24 17:25:20.967818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:22.834 [2024-07-24 17:25:20.967838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:80200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.834 [2024-07-24 17:25:20.967852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:22.834 [2024-07-24 17:25:20.967873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:80208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.834 [2024-07-24 17:25:20.967887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:22.834 [2024-07-24 17:25:20.967908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:80216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.834 [2024-07-24 17:25:20.967922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:22.834 [2024-07-24 17:25:20.967943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:80224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.834 [2024-07-24 17:25:20.967963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 
00:25:22.834 [2024-07-24 17:25:20.967985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:80232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.834 [2024-07-24 17:25:20.967999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:22.834 [2024-07-24 17:25:20.968020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:80240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.834 [2024-07-24 17:25:20.968034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:22.834 [2024-07-24 17:25:20.968055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:80248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.834 [2024-07-24 17:25:20.968084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:22.834 [2024-07-24 17:25:20.968104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:80256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.834 [2024-07-24 17:25:20.968118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:22.834 [2024-07-24 17:25:20.968138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:80264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.834 [2024-07-24 17:25:20.968167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:22.834 [2024-07-24 17:25:20.968189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:80272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.834 [2024-07-24 17:25:20.968203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:22.834 [2024-07-24 17:25:20.968224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:80280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.834 [2024-07-24 17:25:20.968238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:22.834 [2024-07-24 17:25:20.968258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:80288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.835 [2024-07-24 17:25:20.968281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.835 [2024-07-24 17:25:20.968303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:80296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.835 [2024-07-24 17:25:20.968317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.835 [2024-07-24 17:25:20.968337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:80304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.835 [2024-07-24 17:25:20.968351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:22.835 [2024-07-24 17:25:20.968371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:80312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.835 [2024-07-24 17:25:20.968384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:22.835 [2024-07-24 17:25:20.968405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:80320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.835 [2024-07-24 17:25:20.968418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:22.835 [2024-07-24 17:25:20.968438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:79760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.835 [2024-07-24 17:25:20.968452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:22.835 [2024-07-24 17:25:20.968472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:79768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.835 [2024-07-24 17:25:20.968486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:22.835 [2024-07-24 17:25:20.968506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:79776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.835 [2024-07-24 17:25:20.968519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:22.835 [2024-07-24 17:25:20.968539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:79784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.835 [2024-07-24 17:25:20.968558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:22.835 [2024-07-24 17:25:20.968580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:79792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.835 [2024-07-24 17:25:20.968594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:22.835 [2024-07-24 17:25:20.968614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:79800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.835 [2024-07-24 17:25:20.968628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:22.835 [2024-07-24 17:25:20.968648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:79808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.835 [2024-07-24 17:25:20.968662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:22.835 [2024-07-24 17:25:20.968682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:79816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.835 [2024-07-24 17:25:20.968702] nvme_qpair.c: 
00:25:22.835-00:25:22.841 [2024-07-24 17:25:20.968 - 17:25:20.992] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: [condensed: repeated command/completion notice pairs] queued I/O on qid:1, nsid:1 (WRITE commands plus a single READ at lba:79760), lba range 79760-80776, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000; every completion reports ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 p:0 m:0 dnr:0, with only the per-command cid and sqhd values varying.
lba:80744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.841 [2024-07-24 17:25:20.992437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:22.841 [2024-07-24 17:25:20.992465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:80752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.841 [2024-07-24 17:25:20.992484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:22.841 [2024-07-24 17:25:20.992529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:80760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.841 [2024-07-24 17:25:20.992549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:22.841 [2024-07-24 17:25:20.992577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:80768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.841 [2024-07-24 17:25:20.992596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:22.841 [2024-07-24 17:25:20.992624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:80776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.841 [2024-07-24 17:25:20.992643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:22.841 [2024-07-24 17:25:20.992671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:80032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.841 [2024-07-24 17:25:20.992690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:22.841 [2024-07-24 17:25:20.992717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:80040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.841 [2024-07-24 17:25:20.992736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:22.841 [2024-07-24 17:25:20.992764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:80048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.841 [2024-07-24 17:25:20.992783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:22.841 [2024-07-24 17:25:20.992812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:80056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.841 [2024-07-24 17:25:20.992831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:22.841 [2024-07-24 17:25:20.993850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:80064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.841 [2024-07-24 17:25:20.993896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:22.842 [2024-07-24 17:25:20.993938] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:80072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.842 [2024-07-24 17:25:20.993959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:22.842 [2024-07-24 17:25:20.993988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:80080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.842 [2024-07-24 17:25:20.994007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:22.842 [2024-07-24 17:25:20.994035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:80088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.842 [2024-07-24 17:25:20.994054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:22.842 [2024-07-24 17:25:20.994082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:80096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.842 [2024-07-24 17:25:20.994119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:22.842 [2024-07-24 17:25:20.994151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:80104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.842 [2024-07-24 17:25:20.994184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:22.842 [2024-07-24 17:25:20.994214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:80112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.842 [2024-07-24 17:25:20.994233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:22.842 [2024-07-24 17:25:20.994261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:80120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.842 [2024-07-24 17:25:20.994280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:22.842 [2024-07-24 17:25:20.994309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:80128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.842 [2024-07-24 17:25:20.994328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:22.842 [2024-07-24 17:25:20.994356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:80136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.842 [2024-07-24 17:25:20.994375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:22.842 [2024-07-24 17:25:20.994404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:80144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.842 [2024-07-24 17:25:20.994422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006e p:0 m:0 dnr:0 
00:25:22.842 [2024-07-24 17:25:20.994451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:80152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.842 [2024-07-24 17:25:20.994470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:22.842 [2024-07-24 17:25:20.994498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:80160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.842 [2024-07-24 17:25:20.994517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:22.842 [2024-07-24 17:25:20.994545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:80168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.842 [2024-07-24 17:25:20.994565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:22.842 [2024-07-24 17:25:20.994594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.842 [2024-07-24 17:25:20.994613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:22.842 [2024-07-24 17:25:20.994642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:80184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.842 [2024-07-24 17:25:20.994660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:22.842 [2024-07-24 17:25:20.994688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:80192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.842 [2024-07-24 17:25:20.994707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:22.842 [2024-07-24 17:25:20.994735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:80200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.842 [2024-07-24 17:25:20.994762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:22.842 [2024-07-24 17:25:20.994792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:80208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.842 [2024-07-24 17:25:20.994811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:22.842 [2024-07-24 17:25:20.994840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:80216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.842 [2024-07-24 17:25:20.994858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:22.842 [2024-07-24 17:25:20.994886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:80224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.842 [2024-07-24 17:25:20.994905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:47 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:22.842 [2024-07-24 17:25:20.994934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:80232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.842 [2024-07-24 17:25:20.994953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:22.842 [2024-07-24 17:25:20.994981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:80240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.842 [2024-07-24 17:25:20.994999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:22.842 [2024-07-24 17:25:20.995028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:80248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.842 [2024-07-24 17:25:20.995047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:22.842 [2024-07-24 17:25:20.995075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:80256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.842 [2024-07-24 17:25:20.995139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:22.842 [2024-07-24 17:25:20.995170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:80264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.842 [2024-07-24 17:25:20.995190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:22.842 [2024-07-24 17:25:20.995218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:80272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.842 [2024-07-24 17:25:20.995237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:22.842 [2024-07-24 17:25:20.995278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:80280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.842 [2024-07-24 17:25:20.995301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:22.842 [2024-07-24 17:25:20.995326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:80288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.842 [2024-07-24 17:25:20.995340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.842 [2024-07-24 17:25:20.995361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:80296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.842 [2024-07-24 17:25:20.995375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.842 [2024-07-24 17:25:20.995405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:80304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.842 [2024-07-24 17:25:20.995420] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:22.842 [2024-07-24 17:25:20.995440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:80312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.842 [2024-07-24 17:25:20.995455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:22.842 [2024-07-24 17:25:20.995475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:80320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.842 [2024-07-24 17:25:20.995489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:22.842 [2024-07-24 17:25:20.995510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:79760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.842 [2024-07-24 17:25:20.995524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:22.842 [2024-07-24 17:25:20.995545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:79768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.842 [2024-07-24 17:25:20.995559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:22.842 [2024-07-24 17:25:20.995580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:79776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.842 [2024-07-24 17:25:20.995594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:22.843 [2024-07-24 17:25:20.995614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:79784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.843 [2024-07-24 17:25:20.995628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:22.843 [2024-07-24 17:25:20.995664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:79792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.843 [2024-07-24 17:25:20.995677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:22.843 [2024-07-24 17:25:20.995697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:79800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.843 [2024-07-24 17:25:20.995710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:22.843 [2024-07-24 17:25:20.995730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:79808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.843 [2024-07-24 17:25:20.995744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:22.843 [2024-07-24 17:25:20.995764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:79816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:22.843 [2024-07-24 17:25:20.995777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:22.843 [2024-07-24 17:25:20.995797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:79824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.843 [2024-07-24 17:25:20.995811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:22.843 [2024-07-24 17:25:20.995837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:79832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.843 [2024-07-24 17:25:20.995852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:22.843 [2024-07-24 17:25:20.995872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:79840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.843 [2024-07-24 17:25:20.995886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:22.843 [2024-07-24 17:25:20.995906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:79848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.843 [2024-07-24 17:25:20.995919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:22.843 [2024-07-24 17:25:20.995939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:79856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.843 [2024-07-24 17:25:20.995953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:22.843 [2024-07-24 17:25:20.995973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:79864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.843 [2024-07-24 17:25:20.995986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:22.843 [2024-07-24 17:25:20.996006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:79872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.843 [2024-07-24 17:25:20.996020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:22.843 [2024-07-24 17:25:20.996040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:79880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.843 [2024-07-24 17:25:20.996054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:22.843 [2024-07-24 17:25:20.996073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:79888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.843 [2024-07-24 17:25:20.996087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:22.843 [2024-07-24 17:25:20.996124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:79896 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.843 [2024-07-24 17:25:20.996151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:22.843 [2024-07-24 17:25:20.996173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:79904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.843 [2024-07-24 17:25:20.996187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:22.843 [2024-07-24 17:25:20.996208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:79912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.843 [2024-07-24 17:25:20.996222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:22.843 [2024-07-24 17:25:20.996243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:79920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.843 [2024-07-24 17:25:20.996257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:22.843 [2024-07-24 17:25:20.996278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:79928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.843 [2024-07-24 17:25:20.996299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:22.843 [2024-07-24 17:25:20.996320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.843 [2024-07-24 17:25:20.996334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:22.843 [2024-07-24 17:25:20.996355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:79944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.843 [2024-07-24 17:25:20.996369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:22.843 [2024-07-24 17:25:20.996389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:79952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.843 [2024-07-24 17:25:20.996409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:22.843 [2024-07-24 17:25:20.996430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:79960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.843 [2024-07-24 17:25:20.996444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:22.843 [2024-07-24 17:25:20.996464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:79968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.843 [2024-07-24 17:25:20.996479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:22.843 [2024-07-24 17:25:20.996499] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:79976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.843 [2024-07-24 17:25:20.996528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:22.843 [2024-07-24 17:25:20.996548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:79984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.843 [2024-07-24 17:25:20.996562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:22.843 [2024-07-24 17:25:20.996582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:79992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.843 [2024-07-24 17:25:20.996596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:22.843 [2024-07-24 17:25:20.997321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:80000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.843 [2024-07-24 17:25:20.997347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:22.843 [2024-07-24 17:25:20.997373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:80008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.843 [2024-07-24 17:25:20.997389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:22.843 [2024-07-24 17:25:20.997410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:80016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.843 [2024-07-24 17:25:20.997426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:22.843 [2024-07-24 17:25:20.997446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:80024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.843 [2024-07-24 17:25:20.997471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:22.843 [2024-07-24 17:25:20.997507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:80328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.843 [2024-07-24 17:25:20.997522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:22.843 [2024-07-24 17:25:20.997542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:80336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.843 [2024-07-24 17:25:20.997556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:22.843 [2024-07-24 17:25:20.997576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:80344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.843 [2024-07-24 17:25:20.997589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:22.843 [2024-07-24 
17:25:20.997609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:80352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.844 [2024-07-24 17:25:20.997622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:22.844 [2024-07-24 17:25:20.997643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:80360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.844 [2024-07-24 17:25:20.997656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:22.844 [2024-07-24 17:25:20.997676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:80368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.844 [2024-07-24 17:25:20.997689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:22.844 [2024-07-24 17:25:20.997710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:80376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.844 [2024-07-24 17:25:20.997723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:22.844 [2024-07-24 17:25:20.997743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:80384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.844 [2024-07-24 17:25:20.997756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:22.844 [2024-07-24 17:25:20.997776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:80392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.844 [2024-07-24 17:25:20.997790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:22.844 [2024-07-24 17:25:20.997810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:80400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.844 [2024-07-24 17:25:20.997824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:22.844 [2024-07-24 17:25:20.997843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:80408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.844 [2024-07-24 17:25:20.997857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:22.844 [2024-07-24 17:25:20.997878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:80416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.844 [2024-07-24 17:25:20.997891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:22.844 [2024-07-24 17:25:20.997919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:80424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.844 [2024-07-24 17:25:20.997950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 
sqhd:0033 p:0 m:0 dnr:0 00:25:22.844 [2024-07-24 17:25:20.997971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:80432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.844 [2024-07-24 17:25:20.997984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:22.844 [2024-07-24 17:25:20.998005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:80440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.844 [2024-07-24 17:25:20.998019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:22.844 [2024-07-24 17:25:20.998040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:80448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.844 [2024-07-24 17:25:20.998054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:22.844 [2024-07-24 17:25:20.998074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:80456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.844 [2024-07-24 17:25:20.998089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:22.844 [2024-07-24 17:25:20.998109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:80464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.844 [2024-07-24 17:25:20.998123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:22.844 [2024-07-24 17:25:20.998160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:80472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.844 [2024-07-24 17:25:20.998174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:22.844 [2024-07-24 17:25:20.998195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:80480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.844 [2024-07-24 17:25:20.998209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:22.844 [2024-07-24 17:25:20.998230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:80488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.844 [2024-07-24 17:25:20.998244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:22.844 [2024-07-24 17:25:20.998264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:80496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.844 [2024-07-24 17:25:20.998278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:22.844 [2024-07-24 17:25:20.998299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:80504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.844 [2024-07-24 17:25:20.998313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:22.844 [2024-07-24 17:25:20.998333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:80512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.844 [2024-07-24 17:25:20.998347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:22.844 [2024-07-24 17:25:20.998375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:80520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.844 [2024-07-24 17:25:20.998390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:22.844 [2024-07-24 17:25:20.998411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:80528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.844 [2024-07-24 17:25:20.998425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:22.844 [2024-07-24 17:25:20.998446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:80536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.844 [2024-07-24 17:25:20.998460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:22.844 [2024-07-24 17:25:20.998495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:80544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.844 [2024-07-24 17:25:20.998509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:22.844 [2024-07-24 17:25:20.998529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:80552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.844 [2024-07-24 17:25:20.998543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:22.844 [2024-07-24 17:25:20.998563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:80560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.844 [2024-07-24 17:25:20.998577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:22.844 [2024-07-24 17:25:20.998597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:80568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.845 [2024-07-24 17:25:20.998611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:22.845 [2024-07-24 17:25:20.998632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:80576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.845 [2024-07-24 17:25:20.998645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:22.845 [2024-07-24 17:25:20.998665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:80584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.845 [2024-07-24 17:25:20.998679] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:22.845 [2024-07-24 17:25:20.998699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:80592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.845 [2024-07-24 17:25:20.998713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:22.845 [2024-07-24 17:25:20.998733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:80600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.845 [2024-07-24 17:25:20.998747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:22.845 [2024-07-24 17:25:20.998767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:80608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.845 [2024-07-24 17:25:20.998781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:22.845 [2024-07-24 17:25:20.998801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:80616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.845 [2024-07-24 17:25:20.998821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:22.845 [2024-07-24 17:25:20.998842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:80624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.845 [2024-07-24 17:25:20.998856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:22.845 [2024-07-24 17:25:20.998876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:80632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.845 [2024-07-24 17:25:20.998890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:22.845 [2024-07-24 17:25:20.998910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:80640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.845 [2024-07-24 17:25:20.998923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:22.845 [2024-07-24 17:25:20.998960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:80648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.845 [2024-07-24 17:25:20.998974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:22.845 [2024-07-24 17:25:20.998995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:80656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.845 [2024-07-24 17:25:20.999009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:22.845 [2024-07-24 17:25:20.999029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:80664 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:25:22.845 [2024-07-24 17:25:20.999043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:22.845 [2024-07-24 17:25:20.999064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:80672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.845 [2024-07-24 17:25:20.999090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:22.845 [2024-07-24 17:25:20.999125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:80680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.845 [2024-07-24 17:25:20.999140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:22.845 [2024-07-24 17:25:20.999161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:80688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.845 [2024-07-24 17:25:20.999186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:22.845 [2024-07-24 17:25:20.999207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:80696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.845 [2024-07-24 17:25:20.999221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:22.845 [2024-07-24 17:25:20.999242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:80704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.845 [2024-07-24 17:25:20.999256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:22.845 [2024-07-24 17:25:20.999276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:80712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.845 [2024-07-24 17:25:20.999297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:22.845 [2024-07-24 17:25:20.999319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:80720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.845 [2024-07-24 17:25:20.999334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:22.845 [2024-07-24 17:25:20.999355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:80728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.845 [2024-07-24 17:25:20.999369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:22.845 [2024-07-24 17:25:20.999389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:80736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.845 [2024-07-24 17:25:20.999404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:22.845 [2024-07-24 17:25:20.999424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:3 nsid:1 lba:80744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.845 [2024-07-24 17:25:20.999438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:22.845 [2024-07-24 17:25:20.999459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:80752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.845 [2024-07-24 17:25:20.999474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:22.845 [2024-07-24 17:25:20.999494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:80760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.845 [2024-07-24 17:25:20.999508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:22.845 [2024-07-24 17:25:20.999529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:80768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.845 [2024-07-24 17:25:20.999543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:22.845 [2024-07-24 17:25:20.999563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:80776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.845 [2024-07-24 17:25:20.999577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:22.845 [2024-07-24 17:25:20.999598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:80032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.845 [2024-07-24 17:25:20.999611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:22.845 [2024-07-24 17:25:20.999632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:80040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.845 [2024-07-24 17:25:20.999646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:22.845 [2024-07-24 17:25:20.999668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:80048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.845 [2024-07-24 17:25:20.999682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:22.845 [2024-07-24 17:25:21.000453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:80056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.845 [2024-07-24 17:25:21.000479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:22.845 [2024-07-24 17:25:21.000517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:80064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.845 [2024-07-24 17:25:21.000534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:22.845 [2024-07-24 17:25:21.000569] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:80072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.845 [2024-07-24 17:25:21.000583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:22.845 [2024-07-24 17:25:21.000603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:80080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.845 [2024-07-24 17:25:21.000616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:22.846 [2024-07-24 17:25:21.000636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:80088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.846 [2024-07-24 17:25:21.000650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:22.846 [2024-07-24 17:25:21.000670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:80096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.846 [2024-07-24 17:25:21.000683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:22.846 [2024-07-24 17:25:21.000703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:80104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.846 [2024-07-24 17:25:21.000716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:22.846 [2024-07-24 17:25:21.000736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.846 [2024-07-24 17:25:21.000750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:22.846 [2024-07-24 17:25:21.000770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:80120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.846 [2024-07-24 17:25:21.000783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:22.846 [2024-07-24 17:25:21.000803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:80128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.846 [2024-07-24 17:25:21.000817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:22.846 [2024-07-24 17:25:21.000836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:80136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.846 [2024-07-24 17:25:21.000850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:22.846 [2024-07-24 17:25:21.000870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:80144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.846 [2024-07-24 17:25:21.000883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006e p:0 m:0 dnr:0 
00:25:22.846 [2024-07-24 17:25:21.000903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:80152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.846 [2024-07-24 17:25:21.000917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:22.846 [2024-07-24 17:25:21.000945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:80160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.846 [2024-07-24 17:25:21.000960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:22.846 [2024-07-24 17:25:21.000980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:80168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.846 [2024-07-24 17:25:21.000994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:22.846 [2024-07-24 17:25:21.001015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:80176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.846 [2024-07-24 17:25:21.001029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:22.846 [2024-07-24 17:25:21.001049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:80184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.846 [2024-07-24 17:25:21.001062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:22.846 [2024-07-24 17:25:21.001082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:80192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.846 [2024-07-24 17:25:21.001096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:22.846 [2024-07-24 17:25:21.001147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:80200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.846 [2024-07-24 17:25:21.001164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:22.846 [2024-07-24 17:25:21.001185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:80208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.846 [2024-07-24 17:25:21.001200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:22.846 [2024-07-24 17:25:21.001221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:80216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.846 [2024-07-24 17:25:21.001235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:22.846 [2024-07-24 17:25:21.001255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:80224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.846 [2024-07-24 17:25:21.001270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:48 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:22.846 [2024-07-24 17:25:21.001290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:80232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.846 [2024-07-24 17:25:21.001304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:22.846 [2024-07-24 17:25:21.001325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:80240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.846 [2024-07-24 17:25:21.001339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:22.846 [2024-07-24 17:25:21.001360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:80248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.846 [2024-07-24 17:25:21.001386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:22.846 [2024-07-24 17:25:21.001408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:80256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.846 [2024-07-24 17:25:21.001431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:22.846 [2024-07-24 17:25:21.001453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:80264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.846 [2024-07-24 17:25:21.001468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:22.846 [2024-07-24 17:25:21.001503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:80272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.846 [2024-07-24 17:25:21.001517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:22.846 [2024-07-24 17:25:21.001538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:80280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.846 [2024-07-24 17:25:21.001551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:22.846 [2024-07-24 17:25:21.001572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:80288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.846 [2024-07-24 17:25:21.001585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.846 [2024-07-24 17:25:21.001605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:80296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.846 [2024-07-24 17:25:21.001619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.846 [2024-07-24 17:25:21.001640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:80304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.846 [2024-07-24 17:25:21.001654] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:22.846 [2024-07-24 17:25:21.001674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:80312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.846 [2024-07-24 17:25:21.001687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:22.846 [2024-07-24 17:25:21.001708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:80320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.846 [2024-07-24 17:25:21.001721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:22.846 [2024-07-24 17:25:21.001741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:79760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.846 [2024-07-24 17:25:21.001755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:22.846 [2024-07-24 17:25:21.001774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:79768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.846 [2024-07-24 17:25:21.001788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:22.846 [2024-07-24 17:25:21.001808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:79776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.846 [2024-07-24 17:25:21.001821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:22.846 [2024-07-24 17:25:21.001841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:79784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.847 [2024-07-24 17:25:21.001860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:22.847 [2024-07-24 17:25:21.001881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:79792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.847 [2024-07-24 17:25:21.001895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:22.847 [2024-07-24 17:25:21.001915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:79800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.847 [2024-07-24 17:25:21.001945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:22.847 [2024-07-24 17:25:21.001966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:79808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.847 [2024-07-24 17:25:21.001985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:22.847 [2024-07-24 17:25:21.002007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:79816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.847 [2024-07-24 17:25:21.002021] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:22.847 [2024-07-24 17:25:21.002042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:79824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.847 [2024-07-24 17:25:21.002056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:22.847 [2024-07-24 17:25:21.002077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:79832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.847 [2024-07-24 17:25:21.002091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:22.847 [2024-07-24 17:25:21.002111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:79840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.847 [2024-07-24 17:25:21.002135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:22.847 [2024-07-24 17:25:21.002159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:79848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.847 [2024-07-24 17:25:21.002173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:22.847 [2024-07-24 17:25:21.002194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:79856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.847 [2024-07-24 17:25:21.002208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:22.847 [2024-07-24 17:25:21.002234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:79864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.847 [2024-07-24 17:25:21.002250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:22.847 [2024-07-24 17:25:21.002271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:79872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.847 [2024-07-24 17:25:21.002285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:22.847 [2024-07-24 17:25:21.002306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:79880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.847 [2024-07-24 17:25:21.002320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:22.847 [2024-07-24 17:25:21.002348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:79888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.847 [2024-07-24 17:25:21.002363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:22.847 [2024-07-24 17:25:21.002384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:79896 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:25:22.847 [2024-07-24 17:25:21.002399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:22.847 [2024-07-24 17:25:21.002420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:79904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.847 [2024-07-24 17:25:21.002434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:22.847 [2024-07-24 17:25:21.002455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:79912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.847 [2024-07-24 17:25:21.002469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:22.847 [2024-07-24 17:25:21.002490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:79920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.847 [2024-07-24 17:25:21.002509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:22.847 [2024-07-24 17:25:21.002529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:79928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.847 [2024-07-24 17:25:21.002543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:22.847 [2024-07-24 17:25:21.002564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:79936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.847 [2024-07-24 17:25:21.002579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:22.847 [2024-07-24 17:25:21.002600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:79944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.847 [2024-07-24 17:25:21.002614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:22.847 [2024-07-24 17:25:21.002634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:79952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.847 [2024-07-24 17:25:21.002648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:22.847 [2024-07-24 17:25:21.002669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:79960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.847 [2024-07-24 17:25:21.002683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:22.847 [2024-07-24 17:25:21.002704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:79968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.847 [2024-07-24 17:25:21.002718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:22.847 [2024-07-24 17:25:21.002739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:119 nsid:1 lba:79976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.847 [2024-07-24 17:25:21.002753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:22.847 [2024-07-24 17:25:21.002780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:79984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.847 [2024-07-24 17:25:21.002795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:22.847 [2024-07-24 17:25:21.003529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:79992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.847 [2024-07-24 17:25:21.003556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:22.847 [2024-07-24 17:25:21.003583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:80000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.847 [2024-07-24 17:25:21.003605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:22.847 [2024-07-24 17:25:21.003626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:80008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.847 [2024-07-24 17:25:21.003640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:22.847 [2024-07-24 17:25:21.003661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:80016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.847 [2024-07-24 17:25:21.003675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:22.847 [2024-07-24 17:25:21.003696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:80024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.847 [2024-07-24 17:25:21.003710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:22.847 [2024-07-24 17:25:21.003731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:80328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.847 [2024-07-24 17:25:21.003745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:22.847 [2024-07-24 17:25:21.003766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:80336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.847 [2024-07-24 17:25:21.003780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:22.847 [2024-07-24 17:25:21.003800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:80344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.847 [2024-07-24 17:25:21.003814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:22.847 [2024-07-24 17:25:21.003835] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:80352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.847 [2024-07-24 17:25:21.003849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:22.848 [2024-07-24 17:25:21.003870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:80360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.848 [2024-07-24 17:25:21.003884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:22.848 [2024-07-24 17:25:21.003905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:80368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.848 [2024-07-24 17:25:21.003919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:22.848 [2024-07-24 17:25:21.003940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:80376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.848 [2024-07-24 17:25:21.003964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:22.848 [2024-07-24 17:25:21.003987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:80384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.848 [2024-07-24 17:25:21.004002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:22.848 [2024-07-24 17:25:21.004023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:80392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.848 [2024-07-24 17:25:21.004037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:22.848 [2024-07-24 17:25:21.004057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:80400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.848 [2024-07-24 17:25:21.004071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:22.848 [2024-07-24 17:25:21.004092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:80408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.848 [2024-07-24 17:25:21.004124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:22.848 [2024-07-24 17:25:21.004147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:80416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.848 [2024-07-24 17:25:21.004162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:22.848 [2024-07-24 17:25:21.004183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:80424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.848 [2024-07-24 17:25:21.004197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 
00:25:22.848 [2024-07-24 17:25:21.004217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:80432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.848 [2024-07-24 17:25:21.004232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:22.848 [2024-07-24 17:25:21.004252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:80440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.848 [2024-07-24 17:25:21.004266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:22.848 [2024-07-24 17:25:21.004292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:80448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.848 [2024-07-24 17:25:21.004306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:22.848 [2024-07-24 17:25:21.004327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:80456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.848 [2024-07-24 17:25:21.004341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:22.848 [2024-07-24 17:25:21.004361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:80464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.848 [2024-07-24 17:25:21.004375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:22.848 [2024-07-24 17:25:21.004396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:80472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.848 [2024-07-24 17:25:21.004417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:22.848 [2024-07-24 17:25:21.004438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:80480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.848 [2024-07-24 17:25:21.004452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:22.848 [2024-07-24 17:25:21.004473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:80488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.848 [2024-07-24 17:25:21.004488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:22.848 [2024-07-24 17:25:21.004508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:80496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.848 [2024-07-24 17:25:21.004523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:22.848 [2024-07-24 17:25:21.004543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:80504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.848 [2024-07-24 17:25:21.004558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:22.848 [2024-07-24 17:25:21.004578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:80512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.848 [2024-07-24 17:25:21.004592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:22.848 [2024-07-24 17:25:21.004613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:80520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.848 [2024-07-24 17:25:21.004626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:22.848 [2024-07-24 17:25:21.004647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:80528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.848 [2024-07-24 17:25:21.004661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:22.848 [2024-07-24 17:25:21.004681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:80536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.848 [2024-07-24 17:25:21.004696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:22.848 [2024-07-24 17:25:21.004716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:80544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.848 [2024-07-24 17:25:21.004730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:22.848 [2024-07-24 17:25:21.004751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:80552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.848 [2024-07-24 17:25:21.004765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:22.848 [2024-07-24 17:25:21.004786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:80560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.848 [2024-07-24 17:25:21.004799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:22.848 [2024-07-24 17:25:21.004820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:80568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.848 [2024-07-24 17:25:21.004834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:22.848 [2024-07-24 17:25:21.004861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:80576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.848 [2024-07-24 17:25:21.004876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:22.848 [2024-07-24 17:25:21.004896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:80584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.848 [2024-07-24 17:25:21.004910] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:22.848 [2024-07-24 17:25:21.004931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:80592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.848 [2024-07-24 17:25:21.004945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:22.848 [2024-07-24 17:25:21.004966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:80600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.848 [2024-07-24 17:25:21.004979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:22.848 [2024-07-24 17:25:21.005000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:80608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.848 [2024-07-24 17:25:21.005014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:22.848 [2024-07-24 17:25:21.005035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:80616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.848 [2024-07-24 17:25:21.005049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:22.848 [2024-07-24 17:25:21.005069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:80624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.848 [2024-07-24 17:25:21.005083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:22.848 [2024-07-24 17:25:21.005116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:80632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.849 [2024-07-24 17:25:21.005133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:22.849 [2024-07-24 17:25:21.005154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:80640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.849 [2024-07-24 17:25:21.005168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:22.849 [2024-07-24 17:25:21.005188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:80648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.849 [2024-07-24 17:25:21.005202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:22.849 [2024-07-24 17:25:21.005223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:80656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.849 [2024-07-24 17:25:21.005237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:22.849 [2024-07-24 17:25:21.005257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:80664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:22.849 [2024-07-24 17:25:21.005271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:22.849 [2024-07-24 17:25:21.005299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:80672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.849 [2024-07-24 17:25:21.005314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:22.849 [2024-07-24 17:25:21.005335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:80680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.849 [2024-07-24 17:25:21.005349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:22.849 [2024-07-24 17:25:21.005371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:80688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.849 [2024-07-24 17:25:21.005385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:22.849 [2024-07-24 17:25:21.005406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:80696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.849 [2024-07-24 17:25:21.005420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:22.849 [2024-07-24 17:25:21.005440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:80704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.849 [2024-07-24 17:25:21.005454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:22.849 [2024-07-24 17:25:21.005475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:80712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.849 [2024-07-24 17:25:21.005488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:22.849 [2024-07-24 17:25:21.005510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:80720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.849 [2024-07-24 17:25:21.005523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:22.849 [2024-07-24 17:25:21.005544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:80728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.849 [2024-07-24 17:25:21.005558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:22.849 [2024-07-24 17:25:21.005578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:80736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.849 [2024-07-24 17:25:21.005593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:22.849 [2024-07-24 17:25:21.005614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 
lba:80744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.849 [2024-07-24 17:25:21.005633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:22.849 [2024-07-24 17:25:21.005655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:80752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.849 [2024-07-24 17:25:21.005669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:22.849 [2024-07-24 17:25:21.013066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:80760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.849 [2024-07-24 17:25:21.013118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:22.849 [2024-07-24 17:25:21.013145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:80768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.849 [2024-07-24 17:25:21.013175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:22.849 [2024-07-24 17:25:21.013198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:80776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.849 [2024-07-24 17:25:21.013212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:22.849 [2024-07-24 17:25:21.013234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:80032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.849 [2024-07-24 17:25:21.013248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:22.849 [2024-07-24 17:25:21.013270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:80040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.849 [2024-07-24 17:25:21.013285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:22.849 [2024-07-24 17:25:21.014264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:80048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.849 [2024-07-24 17:25:21.014295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:22.849 [2024-07-24 17:25:21.014339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:80056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.849 [2024-07-24 17:25:21.014359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:22.849 [2024-07-24 17:25:21.014380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:80064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.849 [2024-07-24 17:25:21.014394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:22.849 [2024-07-24 17:25:21.014416] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:80072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.849 [2024-07-24 17:25:21.014431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:22.849 [2024-07-24 17:25:21.014452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:80080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.849 [2024-07-24 17:25:21.014466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:22.849 [2024-07-24 17:25:21.014487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:80088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.849 [2024-07-24 17:25:21.014501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:22.849 [2024-07-24 17:25:21.014522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:80096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.849 [2024-07-24 17:25:21.014537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:22.849 [2024-07-24 17:25:21.014557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:80104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.849 [2024-07-24 17:25:21.014571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:22.849 [2024-07-24 17:25:21.014591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:80112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.850 [2024-07-24 17:25:21.014618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:22.850 [2024-07-24 17:25:21.014640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:80120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.850 [2024-07-24 17:25:21.014655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:22.850 [2024-07-24 17:25:21.014676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:80128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.850 [2024-07-24 17:25:21.014691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:22.850 [2024-07-24 17:25:21.014711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:80136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.850 [2024-07-24 17:25:21.014725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:22.850 [2024-07-24 17:25:21.014746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:80144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.850 [2024-07-24 17:25:21.014760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006e p:0 m:0 dnr:0 
00:25:22.850 [2024-07-24 17:25:21.014780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:80152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.850 [2024-07-24 17:25:21.014794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:22.850 [2024-07-24 17:25:21.014815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.850 [2024-07-24 17:25:21.014829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:22.850 [2024-07-24 17:25:21.014850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:80168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.850 [2024-07-24 17:25:21.014864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:22.850 [2024-07-24 17:25:21.014885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:80176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.850 [2024-07-24 17:25:21.014900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:22.850 [2024-07-24 17:25:21.014920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:80184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.850 [2024-07-24 17:25:21.014934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:22.850 [2024-07-24 17:25:21.014955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:80192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.850 [2024-07-24 17:25:21.014969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:22.850 [2024-07-24 17:25:21.014989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:80200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.850 [2024-07-24 17:25:21.015003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:22.850 [2024-07-24 17:25:21.015024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:80208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.850 [2024-07-24 17:25:21.015038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:22.850 [2024-07-24 17:25:21.015067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:80216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.850 [2024-07-24 17:25:21.015124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:22.850 [2024-07-24 17:25:21.015151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:80224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.850 [2024-07-24 17:25:21.015166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:81 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:22.850 [2024-07-24 17:25:21.015187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:80232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.850 [2024-07-24 17:25:21.015201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:22.850 [2024-07-24 17:25:21.015222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:80240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.850 [2024-07-24 17:25:21.015236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:22.850 [2024-07-24 17:25:21.015257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:80248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.850 [2024-07-24 17:25:21.015271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:22.850 [2024-07-24 17:25:21.015292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:80256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.850 [2024-07-24 17:25:21.015306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:22.850 [2024-07-24 17:25:21.015327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:80264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.850 [2024-07-24 17:25:21.015341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:22.850 [2024-07-24 17:25:21.015361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:80272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.850 [2024-07-24 17:25:21.015375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:22.850 [2024-07-24 17:25:21.015396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:80280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.850 [2024-07-24 17:25:21.015410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:22.850 [2024-07-24 17:25:21.015431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:80288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.850 [2024-07-24 17:25:21.015445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.850 [2024-07-24 17:25:21.015473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:80296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.850 [2024-07-24 17:25:21.015487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.850 [2024-07-24 17:25:21.015509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:80304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.850 [2024-07-24 17:25:21.015523] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:22.850 [2024-07-24 17:25:21.015552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:80312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:22.850 [2024-07-24 17:25:21.015567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
[... analogous command/completion notice pairs repeat at 17:25:21 for every outstanding READ/WRITE I/O on qid:1 (lba ~79760-80320), each completion returning ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:25:22.851 [2024-07-24 17:25:27.528429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:29800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:22.851 [2024-07-24 17:25:27.528508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
[... analogous notice pairs repeat at 17:25:27 for every outstanding READ/WRITE I/O on qid:1 (lba ~29544-30560), each completion returning ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:25:22.855 [2024-07-24 17:25:34.552828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:77160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:22.855 [2024-07-24 17:25:34.552905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
[... analogous notice pairs continue at 17:25:34 for the outstanding WRITE I/O on qid:1 (lba 77176 onward), each completion returning ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:25:22.857 [2024-07-24 17:25:34.554857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1
lba:77544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.857 [2024-07-24 17:25:34.554871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:22.857 [2024-07-24 17:25:34.554894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:77552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.857 [2024-07-24 17:25:34.554908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:22.857 [2024-07-24 17:25:34.554930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:77560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.857 [2024-07-24 17:25:34.554950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:22.857 [2024-07-24 17:25:34.554973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:77568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.857 [2024-07-24 17:25:34.554987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:22.857 [2024-07-24 17:25:34.555009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:77576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.857 [2024-07-24 17:25:34.555022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:22.857 [2024-07-24 17:25:34.555044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:77584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.857 [2024-07-24 17:25:34.555057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:22.857 [2024-07-24 17:25:34.555079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:77592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.857 [2024-07-24 17:25:34.555152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:22.857 [2024-07-24 17:25:34.555180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:77600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.857 [2024-07-24 17:25:34.555196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:22.857 [2024-07-24 17:25:34.555220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:77608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.857 [2024-07-24 17:25:34.555234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:22.857 [2024-07-24 17:25:34.555257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:77616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.857 [2024-07-24 17:25:34.555272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:22.857 [2024-07-24 17:25:34.555295] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:77624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.857 [2024-07-24 17:25:34.555310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:22.857 [2024-07-24 17:25:34.555333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:77632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.857 [2024-07-24 17:25:34.555348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:22.857 [2024-07-24 17:25:34.555372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:77640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.857 [2024-07-24 17:25:34.555386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:22.857 [2024-07-24 17:25:34.555410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:77648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.857 [2024-07-24 17:25:34.555424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:22.857 [2024-07-24 17:25:34.555448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:77656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.857 [2024-07-24 17:25:34.555475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:22.857 [2024-07-24 17:25:34.555529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:77664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.857 [2024-07-24 17:25:34.555558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:22.857 [2024-07-24 17:25:34.555579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:77672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.857 [2024-07-24 17:25:34.555593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:22.857 [2024-07-24 17:25:34.555614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:77680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.857 [2024-07-24 17:25:34.555628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:22.857 [2024-07-24 17:25:34.557546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:77688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.857 [2024-07-24 17:25:34.557571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:22.857 [2024-07-24 17:25:34.557600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:77696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.857 [2024-07-24 17:25:34.557615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 
00:25:22.857 [2024-07-24 17:25:34.557642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:77704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.857 [2024-07-24 17:25:34.557656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:22.857 [2024-07-24 17:25:34.557683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:77712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.857 [2024-07-24 17:25:34.557697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:22.857 [2024-07-24 17:25:34.557723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:77720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.857 [2024-07-24 17:25:34.557737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:22.857 [2024-07-24 17:25:34.557764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:77728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.857 [2024-07-24 17:25:34.557779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:22.857 [2024-07-24 17:25:34.557806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:77736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.857 [2024-07-24 17:25:34.557820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:22.857 [2024-07-24 17:25:34.558019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:77168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.858 [2024-07-24 17:25:34.558045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:22.858 [2024-07-24 17:25:47.942207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:10960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.858 [2024-07-24 17:25:47.942253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.858 [2024-07-24 17:25:47.942303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:10968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.858 [2024-07-24 17:25:47.942320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.858 [2024-07-24 17:25:47.942336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:10976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.858 [2024-07-24 17:25:47.942349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.858 [2024-07-24 17:25:47.942364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:10984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.858 [2024-07-24 17:25:47.942377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:25:22.858 [2024-07-24 17:25:47.942392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:10992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.858 [2024-07-24 17:25:47.942405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.858 [2024-07-24 17:25:47.942420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:11000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.858 [2024-07-24 17:25:47.942433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.858 [2024-07-24 17:25:47.942447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.858 [2024-07-24 17:25:47.942460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.858 [2024-07-24 17:25:47.942474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:11016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.858 [2024-07-24 17:25:47.942502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.858 [2024-07-24 17:25:47.942515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:11024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.858 [2024-07-24 17:25:47.942527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.858 [2024-07-24 17:25:47.942540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:11032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.858 [2024-07-24 17:25:47.942552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.858 [2024-07-24 17:25:47.942566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.858 [2024-07-24 17:25:47.942578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.858 [2024-07-24 17:25:47.942591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.858 [2024-07-24 17:25:47.942604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.858 [2024-07-24 17:25:47.942617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:11056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.858 [2024-07-24 17:25:47.942629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.858 [2024-07-24 17:25:47.942643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:11064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.858 [2024-07-24 17:25:47.942663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.858 [2024-07-24 
17:25:47.942677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:11072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.858 [2024-07-24 17:25:47.942690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.858 [2024-07-24 17:25:47.942704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:11080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.858 [2024-07-24 17:25:47.942716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.858 [2024-07-24 17:25:47.942729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.858 [2024-07-24 17:25:47.942743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.858 [2024-07-24 17:25:47.942757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:11096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.858 [2024-07-24 17:25:47.942769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.858 [2024-07-24 17:25:47.942783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:11104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.858 [2024-07-24 17:25:47.942795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.858 [2024-07-24 17:25:47.942809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:11112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.858 [2024-07-24 17:25:47.942822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.858 [2024-07-24 17:25:47.942837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:11120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.858 [2024-07-24 17:25:47.942849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.858 [2024-07-24 17:25:47.942863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:11128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.858 [2024-07-24 17:25:47.942875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.858 [2024-07-24 17:25:47.942889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:11136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.858 [2024-07-24 17:25:47.942901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.858 [2024-07-24 17:25:47.942915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:11144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.858 [2024-07-24 17:25:47.942927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.858 [2024-07-24 17:25:47.942941] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.858 [2024-07-24 17:25:47.942953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.858 [2024-07-24 17:25:47.942967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:11160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.858 [2024-07-24 17:25:47.942979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.858 [2024-07-24 17:25:47.942999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:11168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.858 [2024-07-24 17:25:47.943013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.858 [2024-07-24 17:25:47.943026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.858 [2024-07-24 17:25:47.943039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.858 [2024-07-24 17:25:47.943053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:11184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.858 [2024-07-24 17:25:47.943065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.858 [2024-07-24 17:25:47.943079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.858 [2024-07-24 17:25:47.943130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.858 [2024-07-24 17:25:47.943150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:11200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.858 [2024-07-24 17:25:47.943163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.858 [2024-07-24 17:25:47.943178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:11208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.858 [2024-07-24 17:25:47.943191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.858 [2024-07-24 17:25:47.943206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:11216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.858 [2024-07-24 17:25:47.943220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.858 [2024-07-24 17:25:47.943236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:11224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.858 [2024-07-24 17:25:47.943249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.858 [2024-07-24 17:25:47.943264] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:106 nsid:1 lba:11232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.858 [2024-07-24 17:25:47.943277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.859 [2024-07-24 17:25:47.943295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:11240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.859 [2024-07-24 17:25:47.943307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.859 [2024-07-24 17:25:47.943322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.859 [2024-07-24 17:25:47.943335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.859 [2024-07-24 17:25:47.943350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:11256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.859 [2024-07-24 17:25:47.943363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.859 [2024-07-24 17:25:47.943378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:11264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.859 [2024-07-24 17:25:47.943399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.859 [2024-07-24 17:25:47.943414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:11272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.859 [2024-07-24 17:25:47.943427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.859 [2024-07-24 17:25:47.943442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:11280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.859 [2024-07-24 17:25:47.943455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.859 [2024-07-24 17:25:47.943469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:11288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.859 [2024-07-24 17:25:47.943483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.859 [2024-07-24 17:25:47.943521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.859 [2024-07-24 17:25:47.943550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.859 [2024-07-24 17:25:47.943564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:11304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.859 [2024-07-24 17:25:47.943576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.859 [2024-07-24 17:25:47.943590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 
lba:11312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.859 [2024-07-24 17:25:47.943602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.859 [2024-07-24 17:25:47.943616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:11320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.859 [2024-07-24 17:25:47.943628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.859 [2024-07-24 17:25:47.943659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.859 [2024-07-24 17:25:47.943681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.859 [2024-07-24 17:25:47.943711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:11336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.859 [2024-07-24 17:25:47.943724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.859 [2024-07-24 17:25:47.943738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:11344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.859 [2024-07-24 17:25:47.943752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.859 [2024-07-24 17:25:47.943768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.859 [2024-07-24 17:25:47.943781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.859 [2024-07-24 17:25:47.943795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.859 [2024-07-24 17:25:47.943808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.859 [2024-07-24 17:25:47.943828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.859 [2024-07-24 17:25:47.943843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.859 [2024-07-24 17:25:47.943858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:11376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.859 [2024-07-24 17:25:47.943871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.859 [2024-07-24 17:25:47.943885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:11384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.859 [2024-07-24 17:25:47.943898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.859 [2024-07-24 17:25:47.943913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:11392 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:22.859 [2024-07-24 17:25:47.943926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.859 [2024-07-24 17:25:47.943940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:11400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.859 [2024-07-24 17:25:47.943953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.859 [2024-07-24 17:25:47.943968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:11408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.859 [2024-07-24 17:25:47.943981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.859 [2024-07-24 17:25:47.943995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:11416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.859 [2024-07-24 17:25:47.944008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.859 [2024-07-24 17:25:47.944028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:11424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.859 [2024-07-24 17:25:47.944041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.859 [2024-07-24 17:25:47.944056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.859 [2024-07-24 17:25:47.944069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.859 [2024-07-24 17:25:47.944084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:11440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.859 [2024-07-24 17:25:47.944097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.860 [2024-07-24 17:25:47.944112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:11448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.860 [2024-07-24 17:25:47.944125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.860 [2024-07-24 17:25:47.944140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:11456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.860 [2024-07-24 17:25:47.944165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.860 [2024-07-24 17:25:47.944183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:11464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.860 [2024-07-24 17:25:47.944196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.860 [2024-07-24 17:25:47.944218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:11472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.860 [2024-07-24 
17:25:47.944231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.860 [2024-07-24 17:25:47.944247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:11480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.860 [2024-07-24 17:25:47.944260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.860 [2024-07-24 17:25:47.944274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:11488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.860 [2024-07-24 17:25:47.944287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.860 [2024-07-24 17:25:47.944302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:11496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.860 [2024-07-24 17:25:47.944315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.860 [2024-07-24 17:25:47.944330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:11504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.860 [2024-07-24 17:25:47.944343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.860 [2024-07-24 17:25:47.944357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:11512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.860 [2024-07-24 17:25:47.944370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.860 [2024-07-24 17:25:47.944385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.860 [2024-07-24 17:25:47.944398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.860 [2024-07-24 17:25:47.944413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.860 [2024-07-24 17:25:47.944426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.860 [2024-07-24 17:25:47.944440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:11536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.860 [2024-07-24 17:25:47.944453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.860 [2024-07-24 17:25:47.944468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:11544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.860 [2024-07-24 17:25:47.944481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.860 [2024-07-24 17:25:47.944514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:11552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.860 [2024-07-24 17:25:47.944527] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.860 [2024-07-24 17:25:47.944542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:11560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.860 [2024-07-24 17:25:47.944554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.860 [2024-07-24 17:25:47.944569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.860 [2024-07-24 17:25:47.944587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.860 [2024-07-24 17:25:47.944602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:11576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.860 [2024-07-24 17:25:47.944632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.860 [2024-07-24 17:25:47.944647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:11584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.860 [2024-07-24 17:25:47.944660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.860 [2024-07-24 17:25:47.944674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:11592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.860 [2024-07-24 17:25:47.944687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.860 [2024-07-24 17:25:47.944702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:11600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.860 [2024-07-24 17:25:47.944715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.860 [2024-07-24 17:25:47.944730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:11608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.860 [2024-07-24 17:25:47.944743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.860 [2024-07-24 17:25:47.944757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:11616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.860 [2024-07-24 17:25:47.944770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.860 [2024-07-24 17:25:47.944785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:11632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.860 [2024-07-24 17:25:47.944798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.860 [2024-07-24 17:25:47.944813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:11640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.860 [2024-07-24 17:25:47.944826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.860 [2024-07-24 17:25:47.944841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:11648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.860 [2024-07-24 17:25:47.944854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.860 [2024-07-24 17:25:47.944869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:11656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.860 [2024-07-24 17:25:47.944882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.860 [2024-07-24 17:25:47.944896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:11664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.860 [2024-07-24 17:25:47.944915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.860 [2024-07-24 17:25:47.944929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:11672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.860 [2024-07-24 17:25:47.944942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.860 [2024-07-24 17:25:47.944963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:11680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.860 [2024-07-24 17:25:47.944977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.860 [2024-07-24 17:25:47.944996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:11688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.860 [2024-07-24 17:25:47.945010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.860 [2024-07-24 17:25:47.945026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:11696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.860 [2024-07-24 17:25:47.945039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.860 [2024-07-24 17:25:47.945055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:11704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.860 [2024-07-24 17:25:47.945068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.860 [2024-07-24 17:25:47.945083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:11712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.860 [2024-07-24 17:25:47.945096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.860 [2024-07-24 17:25:47.945111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:11720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.860 [2024-07-24 17:25:47.945133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:25:22.861 [2024-07-24 17:25:47.945150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:11728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.861 [2024-07-24 17:25:47.945163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.861 [2024-07-24 17:25:47.945178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:11736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.861 [2024-07-24 17:25:47.945192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.861 [2024-07-24 17:25:47.945207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:11744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.861 [2024-07-24 17:25:47.945220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.861 [2024-07-24 17:25:47.945235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:11752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.861 [2024-07-24 17:25:47.945248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.861 [2024-07-24 17:25:47.945263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:11760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.861 [2024-07-24 17:25:47.945276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.861 [2024-07-24 17:25:47.945291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:11768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.861 [2024-07-24 17:25:47.945304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.861 [2024-07-24 17:25:47.945319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:11776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.861 [2024-07-24 17:25:47.945338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.861 [2024-07-24 17:25:47.945354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:11784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.861 [2024-07-24 17:25:47.945367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.861 [2024-07-24 17:25:47.945382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:11792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.861 [2024-07-24 17:25:47.945395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.861 [2024-07-24 17:25:47.945409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:11800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.861 [2024-07-24 17:25:47.945422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.861 [2024-07-24 
17:25:47.945437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:11808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.861 [2024-07-24 17:25:47.945457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.861 [2024-07-24 17:25:47.945491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:11816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.861 [2024-07-24 17:25:47.945505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.861 [2024-07-24 17:25:47.945519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:11824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.861 [2024-07-24 17:25:47.945532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.861 [2024-07-24 17:25:47.945546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:11832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.861 [2024-07-24 17:25:47.945560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.861 [2024-07-24 17:25:47.945589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.861 [2024-07-24 17:25:47.945601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.861 [2024-07-24 17:25:47.945615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:11848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.861 [2024-07-24 17:25:47.945628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.861 [2024-07-24 17:25:47.945641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:11856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.861 [2024-07-24 17:25:47.945654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.861 [2024-07-24 17:25:47.945668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:11864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.861 [2024-07-24 17:25:47.945680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.861 [2024-07-24 17:25:47.945694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:11872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.861 [2024-07-24 17:25:47.945706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.861 [2024-07-24 17:25:47.945720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.861 [2024-07-24 17:25:47.945737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.861 [2024-07-24 17:25:47.945753] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:11888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.861 [2024-07-24 17:25:47.945766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.861 [2024-07-24 17:25:47.945779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:11896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.861 [2024-07-24 17:25:47.945792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.861 [2024-07-24 17:25:47.945806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:11904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.861 [2024-07-24 17:25:47.945818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.861 [2024-07-24 17:25:47.945831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:11912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.861 [2024-07-24 17:25:47.945844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.861 [2024-07-24 17:25:47.945857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:11920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.861 [2024-07-24 17:25:47.945886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.861 [2024-07-24 17:25:47.945905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:11928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.861 [2024-07-24 17:25:47.945933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.861 [2024-07-24 17:25:47.945955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:11936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.861 [2024-07-24 17:25:47.945973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.861 [2024-07-24 17:25:47.945993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:11944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.861 [2024-07-24 17:25:47.946008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.861 [2024-07-24 17:25:47.946023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:11952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.861 [2024-07-24 17:25:47.946036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.861 [2024-07-24 17:25:47.946050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:11960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.861 [2024-07-24 17:25:47.946063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.861 [2024-07-24 17:25:47.946078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:58 nsid:1 lba:11968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.861 [2024-07-24 17:25:47.946091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.861 [2024-07-24 17:25:47.946106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:11976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:22.861 [2024-07-24 17:25:47.946119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.861 [2024-07-24 17:25:47.946138] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15aa250 is same with the state(5) to be set 00:25:22.861 [2024-07-24 17:25:47.946163] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:22.861 [2024-07-24 17:25:47.946176] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:22.861 [2024-07-24 17:25:47.946186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11624 len:8 PRP1 0x0 PRP2 0x0 00:25:22.861 [2024-07-24 17:25:47.946199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:22.861 [2024-07-24 17:25:47.946257] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x15aa250 was disconnected and freed. reset controller. 00:25:22.862 [2024-07-24 17:25:47.947686] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:22.862 [2024-07-24 17:25:47.947772] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15269c0 (9): Bad file descriptor 00:25:22.862 [2024-07-24 17:25:47.947893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.862 [2024-07-24 17:25:47.947920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15269c0 with addr=10.0.0.2, port=4421 00:25:22.862 [2024-07-24 17:25:47.947935] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15269c0 is same with the state(5) to be set 00:25:22.862 [2024-07-24 17:25:47.947958] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15269c0 (9): Bad file descriptor 00:25:22.862 [2024-07-24 17:25:47.947979] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:22.862 [2024-07-24 17:25:47.948007] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:22.862 [2024-07-24 17:25:47.948022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:22.862 [2024-07-24 17:25:47.948045] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:22.862 [2024-07-24 17:25:47.948074] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:22.862 [2024-07-24 17:25:58.025125] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
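The burst above is the expected failover pattern for this test: once the path the host was using goes away, every I/O still queued on that qpair completes with ABORTED - SQ DELETION (00/08), bdev_nvme frees the disconnected qpair and resets the controller, the first reconnect attempt to 10.0.0.2 port 4421 fails with errno 111 (connection refused), and the retry roughly ten seconds later succeeds. A minimal sketch of how such a path flip can be driven from the target side with rpc.py; the NQN, address and ports are taken from the log, while the ordering and timing here are illustrative only:

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  # publish the alternate path, then drop the active one; I/O queued against
  # the old path is aborted with SQ DELETION and bdev_nvme reconnects on 4421
  $rpc_py nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4421
  $rpc_py nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4420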
00:25:22.862 Received shutdown signal, test time was about 55.343250 seconds 00:25:22.862 00:25:22.862 Latency(us) 00:25:22.862 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:22.862 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:22.862 Verification LBA range: start 0x0 length 0x4000 00:25:22.862 Nvme0n1 : 55.34 7770.76 30.35 0.00 0.00 16441.96 737.28 7046430.72 00:25:22.862 =================================================================================================================== 00:25:22.862 Total : 7770.76 30.35 0.00 0.00 16441.96 737.28 7046430.72 00:25:22.862 17:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:22.862 17:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:25:22.862 17:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:25:22.862 17:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:25:22.862 17:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:22.862 17:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@117 -- # sync 00:25:22.862 17:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:22.862 17:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@120 -- # set +e 00:25:22.862 17:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:22.862 17:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:22.862 rmmod nvme_tcp 00:25:22.862 rmmod nvme_fabrics 00:25:22.862 rmmod nvme_keyring 00:25:22.862 17:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:22.862 17:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set -e 00:25:22.862 17:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # return 0 00:25:22.862 17:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@489 -- # '[' -n 94174 ']' 00:25:22.862 17:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@490 -- # killprocess 94174 00:25:22.862 17:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@950 -- # '[' -z 94174 ']' 00:25:22.862 17:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # kill -0 94174 00:25:22.862 17:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # uname 00:25:22.862 17:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:22.862 17:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94174 00:25:22.862 17:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:22.862 17:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:22.862 17:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94174' 00:25:22.862 killing process with pid 94174 00:25:22.862 17:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@969 -- # kill 94174 00:25:22.862 17:26:08 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@974 -- # wait 94174 00:25:22.862 17:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:22.862 17:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:22.862 17:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:22.862 17:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:22.862 17:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:22.862 17:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:22.862 17:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:22.862 17:26:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:22.862 17:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:25:22.862 00:25:22.862 real 1m1.523s 00:25:22.862 user 2m54.095s 00:25:22.862 sys 0m13.491s 00:25:22.862 17:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:22.862 17:26:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:25:22.862 ************************************ 00:25:22.862 END TEST nvmf_host_multipath 00:25:22.862 ************************************ 00:25:23.122 17:26:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:25:23.122 17:26:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:23.122 17:26:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:23.122 17:26:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.122 ************************************ 00:25:23.122 START TEST nvmf_timeout 00:25:23.122 ************************************ 00:25:23.122 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:25:23.122 * Looking for test storage... 
00:25:23.122 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:25:23.122 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:23.122 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:25:23.122 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:23.122 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:23.122 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:23.122 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:23.122 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:23.122 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:23.122 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:23.122 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:23.122 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:23.122 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:23.122 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:25:23.122 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:25:23.122 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:23.122 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:23.122 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:23.122 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:23.122 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:23.122 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:23.122 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:23.122 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:23.122 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.123 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.123 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.123 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:25:23.123 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:23.123 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@47 -- # : 0 00:25:23.123 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:23.123 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:23.123 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:23.123 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:23.123 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:23.123 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:23.123 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:23.123 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:23.123 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:23.123 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:23.123 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:23.123 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:25:23.123 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
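Two JSON-RPC endpoints are used from here on: the nvmf target answers on the default /var/tmp/spdk.sock, while the bdevperf process started further down listens on the bdevperf_rpc_sock path set above and is reached with rpc.py's -s option. A small illustration; the two query calls are just examples of existing rpc.py methods, not commands from this run:

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc_py nvmf_get_subsystems                                   # target process, default socket
  $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers   # bdevperf process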
00:25:23.123 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:25:23.123 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:23.123 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:23.123 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:23.123 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:23.123 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:23.123 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:23.123 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:23.123 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:23.123 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:25:23.123 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:25:23.123 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:25:23.123 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:25:23.123 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:25:23.123 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@432 -- # nvmf_veth_init 00:25:23.123 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:23.123 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:23.123 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:23.123 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:25:23.123 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:23.123 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:23.123 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:23.123 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:23.123 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:23.123 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:23.123 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:23.123 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:23.123 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:25:23.123 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:25:23.123 Cannot find device "nvmf_tgt_br" 00:25:23.123 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # true 00:25:23.123 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:25:23.123 Cannot find device "nvmf_tgt_br2" 00:25:23.123 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # true 
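The "Cannot find device" and "Cannot open network namespace" errors in this stretch are only nvmf_veth_init tearing down leftovers from a previous run; each teardown command is followed by true so the script keeps going. The build half that follows creates the veth/namespace topology sketched below; this is a condensed copy of the logged commands (the per-link bring-up and the FORWARD iptables rule are omitted), with the interface names and addresses defined above:

  # one private namespace for the target, veth pairs for the initiator and
  # both target interfaces, everything joined on the nvmf_br bridge
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # first target IP
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # second target IP
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT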
00:25:23.123 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:25:23.123 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:25:23.123 Cannot find device "nvmf_tgt_br" 00:25:23.123 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # true 00:25:23.123 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:25:23.123 Cannot find device "nvmf_tgt_br2" 00:25:23.123 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # true 00:25:23.123 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:25:23.123 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:25:23.123 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:23.123 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:23.123 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:25:23.123 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:23.383 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:23.383 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:25:23.383 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:25:23.383 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:23.383 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:23.383 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:23.383 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:23.383 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:23.383 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:23.383 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:23.383 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:23.383 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:25:23.383 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:25:23.383 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:25:23.383 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:25:23.383 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:23.383 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:23.383 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:23.383 17:26:09 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:25:23.383 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:25:23.383 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:25:23.383 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:23.383 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:23.383 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:23.383 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:23.383 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:25:23.383 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:23.383 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:25:23.383 00:25:23.383 --- 10.0.0.2 ping statistics --- 00:25:23.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:23.383 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:25:23.383 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:25:23.383 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:23.383 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:25:23.383 00:25:23.383 --- 10.0.0.3 ping statistics --- 00:25:23.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:23.383 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:25:23.383 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:23.383 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:23.383 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:25:23.383 00:25:23.383 --- 10.0.0.1 ping statistics --- 00:25:23.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:23.383 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:25:23.383 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:23.383 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@433 -- # return 0 00:25:23.383 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:23.383 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:23.383 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:23.383 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:23.383 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:23.383 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:23.383 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:23.383 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:25:23.383 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:23.383 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:23.383 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:23.383 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@481 -- # nvmfpid=95547 00:25:23.383 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:25:23.383 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@482 -- # waitforlisten 95547 00:25:23.383 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 95547 ']' 00:25:23.383 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:23.383 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:23.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:23.383 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:23.383 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:23.383 17:26:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:23.642 [2024-07-24 17:26:09.618891] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:25:23.642 [2024-07-24 17:26:09.618988] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:23.642 [2024-07-24 17:26:09.758571] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:23.642 [2024-07-24 17:26:09.874924] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:25:23.642 [2024-07-24 17:26:09.875013] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:23.642 [2024-07-24 17:26:09.875038] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:23.642 [2024-07-24 17:26:09.875055] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:23.642 [2024-07-24 17:26:09.875069] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:23.901 [2024-07-24 17:26:09.875555] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:23.901 [2024-07-24 17:26:09.875618] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:24.467 17:26:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:24.467 17:26:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:25:24.467 17:26:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:24.467 17:26:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:24.467 17:26:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:24.467 17:26:10 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:24.467 17:26:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:24.467 17:26:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:24.744 [2024-07-24 17:26:10.945051] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:25.014 17:26:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:25.014 Malloc0 00:25:25.014 17:26:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:25.272 17:26:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:25.531 17:26:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:25.789 [2024-07-24 17:26:11.969331] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:25.789 17:26:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=95638 00:25:25.789 17:26:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:25:25.789 17:26:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 95638 /var/tmp/bdevperf.sock 00:25:25.789 17:26:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 95638 ']' 00:25:25.789 17:26:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:25.789 17:26:11 nvmf_tcp.nvmf_host.nvmf_timeout -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:25:25.789 17:26:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:25.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:25.790 17:26:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:25.790 17:26:11 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:26.048 [2024-07-24 17:26:12.036890] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:25:26.048 [2024-07-24 17:26:12.037004] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95638 ] 00:25:26.048 [2024-07-24 17:26:12.174840] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:26.306 [2024-07-24 17:26:12.295734] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:26.874 17:26:13 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:26.874 17:26:13 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:25:26.874 17:26:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:25:27.221 17:26:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:25:27.480 NVMe0n1 00:25:27.480 17:26:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:27.480 17:26:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=95686 00:25:27.480 17:26:13 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:25:27.480 Running I/O for 10 seconds... 
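Everything the 10-second run above exercises was set up through rpc.py in the lines just before it: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks exported as namespace 1 of nqn.2016-06.io.spdk:cnode1, a listener on 10.0.0.2 port 4420, and a bdevperf initiator (queue depth 128, 4096-byte verify workload) that attaches with a 5 s controller-loss timeout and a 2 s reconnect delay. Condensed from the logged calls, with the socket paths and values exactly as logged:

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # target side, default RPC socket /var/tmp/spdk.sock
  $rpc_py nvmf_create_transport -t tcp -o -u 8192
  $rpc_py bdev_malloc_create 64 512 -b Malloc0
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # initiator side, bdevperf's own RPC socket; short loss/reconnect timers
  # so a lost path is noticed well inside the 10 s run
  $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2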
00:25:28.417 17:26:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:28.678 [2024-07-24 17:26:14.819638] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x240d730 is same with the state(5) to be set
00:25:28.678-00:25:28.679 tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: the identical message is logged a few dozen more times for tqpair=0x240d730 while the listener is being removed
00:25:28.679 [2024-07-24 17:26:14.820093] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x240d730 is same with the
state(5) to be set 00:25:28.679 [2024-07-24 17:26:14.820101] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x240d730 is same with the state(5) to be set 00:25:28.679 [2024-07-24 17:26:14.820109] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x240d730 is same with the state(5) to be set 00:25:28.679 [2024-07-24 17:26:14.820117] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x240d730 is same with the state(5) to be set 00:25:28.679 [2024-07-24 17:26:14.820124] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x240d730 is same with the state(5) to be set 00:25:28.679 [2024-07-24 17:26:14.820132] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x240d730 is same with the state(5) to be set 00:25:28.679 [2024-07-24 17:26:14.820140] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x240d730 is same with the state(5) to be set 00:25:28.679 [2024-07-24 17:26:14.820148] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x240d730 is same with the state(5) to be set 00:25:28.679 [2024-07-24 17:26:14.820156] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x240d730 is same with the state(5) to be set 00:25:28.679 [2024-07-24 17:26:14.820176] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x240d730 is same with the state(5) to be set 00:25:28.679 [2024-07-24 17:26:14.820184] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x240d730 is same with the state(5) to be set 00:25:28.679 [2024-07-24 17:26:14.820193] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x240d730 is same with the state(5) to be set 00:25:28.679 [2024-07-24 17:26:14.820200] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x240d730 is same with the state(5) to be set 00:25:28.679 [2024-07-24 17:26:14.820208] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x240d730 is same with the state(5) to be set 00:25:28.679 [2024-07-24 17:26:14.820521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:86416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.679 [2024-07-24 17:26:14.820551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.679 [2024-07-24 17:26:14.820574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:86424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.679 [2024-07-24 17:26:14.820585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.679 [2024-07-24 17:26:14.820596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:86432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.679 [2024-07-24 17:26:14.820604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.679 [2024-07-24 17:26:14.820615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.679 [2024-07-24 17:26:14.820623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0
00:25:28.679-00:25:28.680 [2024-07-24 17:26:14.820634-17:26:14.821774] nvme_qpair.c: 243/474: *NOTICE*: the same command / ABORTED - SQ DELETION (00/08) completion pattern repeats for every outstanding I/O on the deleted queue: READ sqid:1 lba:86448 through lba:86656, WRITE sqid:1 lba:87376 through lba:87432, and READ sqid:1 lba:86664 through lba:86776 (len:8 each, various cid), each completed with qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:28.680 [2024-07-24 17:26:14.821784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:86784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.680 [2024-07-24 17:26:14.821792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.680 [2024-07-24 17:26:14.821802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:86792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.680 [2024-07-24 17:26:14.821818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.680 [2024-07-24 17:26:14.821829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:86800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.680 [2024-07-24 17:26:14.821837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.681 [2024-07-24 17:26:14.821847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:86808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.681 [2024-07-24 17:26:14.821871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.681 [2024-07-24 17:26:14.821900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:86816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.681 [2024-07-24 17:26:14.821909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.681 [2024-07-24 17:26:14.821921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:86824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.681 [2024-07-24 17:26:14.821930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.681 [2024-07-24 17:26:14.821941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:86832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.681 [2024-07-24 17:26:14.821950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.681 [2024-07-24 17:26:14.821962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:86840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.681 [2024-07-24 17:26:14.821971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.681 [2024-07-24 17:26:14.821982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:86848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.681 [2024-07-24 17:26:14.821992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.681 [2024-07-24 17:26:14.822004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:86856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.681 [2024-07-24 17:26:14.822013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.681 [2024-07-24 17:26:14.822024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:86864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.681 [2024-07-24 17:26:14.822034] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.681 [2024-07-24 17:26:14.822045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:86872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.681 [2024-07-24 17:26:14.822054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.681 [2024-07-24 17:26:14.822065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:86880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.681 [2024-07-24 17:26:14.822075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.681 [2024-07-24 17:26:14.822086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:86888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.681 [2024-07-24 17:26:14.822095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.681 [2024-07-24 17:26:14.822113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.681 [2024-07-24 17:26:14.822129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.681 [2024-07-24 17:26:14.822140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:86904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.681 [2024-07-24 17:26:14.822157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.681 [2024-07-24 17:26:14.822168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:86912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.681 [2024-07-24 17:26:14.822186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.681 [2024-07-24 17:26:14.822208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:86920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.681 [2024-07-24 17:26:14.822217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.681 [2024-07-24 17:26:14.822229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:86928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.681 [2024-07-24 17:26:14.822238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.681 [2024-07-24 17:26:14.822249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:86936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.681 [2024-07-24 17:26:14.822258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.681 [2024-07-24 17:26:14.822270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:86944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.681 [2024-07-24 17:26:14.822279] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.681 [2024-07-24 17:26:14.822292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:86952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.681 [2024-07-24 17:26:14.822301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.681 [2024-07-24 17:26:14.822312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:86960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.681 [2024-07-24 17:26:14.822322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.681 [2024-07-24 17:26:14.822333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:86968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.681 [2024-07-24 17:26:14.822343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.681 [2024-07-24 17:26:14.822354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:86976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.681 [2024-07-24 17:26:14.822363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.681 [2024-07-24 17:26:14.822374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:86984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.681 [2024-07-24 17:26:14.822382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.681 [2024-07-24 17:26:14.822394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:86992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.681 [2024-07-24 17:26:14.822403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.681 [2024-07-24 17:26:14.822414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:87000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.681 [2024-07-24 17:26:14.822423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.681 [2024-07-24 17:26:14.822434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:87008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.681 [2024-07-24 17:26:14.822452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.681 [2024-07-24 17:26:14.822464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:87016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.681 [2024-07-24 17:26:14.822488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.681 [2024-07-24 17:26:14.822513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:87024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.681 [2024-07-24 17:26:14.822521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.681 [2024-07-24 17:26:14.822531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:87032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.681 [2024-07-24 17:26:14.822540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.681 [2024-07-24 17:26:14.822550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:87040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.681 [2024-07-24 17:26:14.822558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.681 [2024-07-24 17:26:14.822567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:87048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.681 [2024-07-24 17:26:14.822576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.681 [2024-07-24 17:26:14.822586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:87056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.681 [2024-07-24 17:26:14.822594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.681 [2024-07-24 17:26:14.822604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:87064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.681 [2024-07-24 17:26:14.822613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.681 [2024-07-24 17:26:14.822624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:87072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.681 [2024-07-24 17:26:14.822632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.681 [2024-07-24 17:26:14.822652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:87080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.681 [2024-07-24 17:26:14.822661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.681 [2024-07-24 17:26:14.822671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:87088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.682 [2024-07-24 17:26:14.822679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.682 [2024-07-24 17:26:14.822689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:87096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.682 [2024-07-24 17:26:14.822697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.682 [2024-07-24 17:26:14.822706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:87104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.682 [2024-07-24 17:26:14.822714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.682 [2024-07-24 17:26:14.822724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:87112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.682 [2024-07-24 17:26:14.822732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.682 [2024-07-24 17:26:14.822742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:87120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.682 [2024-07-24 17:26:14.822750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.682 [2024-07-24 17:26:14.822770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:87128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.682 [2024-07-24 17:26:14.822777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.682 [2024-07-24 17:26:14.822787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:87136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.682 [2024-07-24 17:26:14.822800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.682 [2024-07-24 17:26:14.822810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:87144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.682 [2024-07-24 17:26:14.822834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.682 [2024-07-24 17:26:14.822844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:87152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.682 [2024-07-24 17:26:14.822852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.682 [2024-07-24 17:26:14.822877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:87160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.682 [2024-07-24 17:26:14.822903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.682 [2024-07-24 17:26:14.822914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:87168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.682 [2024-07-24 17:26:14.822922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.682 [2024-07-24 17:26:14.822933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:87176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.682 [2024-07-24 17:26:14.822942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.682 [2024-07-24 17:26:14.822952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:87184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.682 [2024-07-24 17:26:14.822968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.682 
[2024-07-24 17:26:14.822979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:87192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.682 [2024-07-24 17:26:14.822988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.682 [2024-07-24 17:26:14.822999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:87200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.682 [2024-07-24 17:26:14.823008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.682 [2024-07-24 17:26:14.823019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:87208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.682 [2024-07-24 17:26:14.823027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.682 [2024-07-24 17:26:14.823038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:87216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.682 [2024-07-24 17:26:14.823047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.682 [2024-07-24 17:26:14.823058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:87224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.682 [2024-07-24 17:26:14.823067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.682 [2024-07-24 17:26:14.823077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:87232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.682 [2024-07-24 17:26:14.823086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.682 [2024-07-24 17:26:14.823097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:87240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.682 [2024-07-24 17:26:14.823146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.682 [2024-07-24 17:26:14.823171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:87248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.682 [2024-07-24 17:26:14.823181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.682 [2024-07-24 17:26:14.823204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:87256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.682 [2024-07-24 17:26:14.823213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.682 [2024-07-24 17:26:14.823226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:87264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.682 [2024-07-24 17:26:14.823240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.682 [2024-07-24 17:26:14.823252] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:87272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.682 [2024-07-24 17:26:14.823260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.682 [2024-07-24 17:26:14.823272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:87280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.682 [2024-07-24 17:26:14.823281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.682 [2024-07-24 17:26:14.823292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:87288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.682 [2024-07-24 17:26:14.823301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.682 [2024-07-24 17:26:14.823312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:87296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.682 [2024-07-24 17:26:14.823321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.682 [2024-07-24 17:26:14.823331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:87304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.682 [2024-07-24 17:26:14.823341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.682 [2024-07-24 17:26:14.823352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:87312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.682 [2024-07-24 17:26:14.823366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.682 [2024-07-24 17:26:14.823377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:87320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.682 [2024-07-24 17:26:14.823387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.682 [2024-07-24 17:26:14.823398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:87328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.682 [2024-07-24 17:26:14.823407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.682 [2024-07-24 17:26:14.823418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:87336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.682 [2024-07-24 17:26:14.823427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.682 [2024-07-24 17:26:14.823438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:87344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.682 [2024-07-24 17:26:14.823447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.682 [2024-07-24 17:26:14.823458] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:20 nsid:1 lba:87352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.682 [2024-07-24 17:26:14.823467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.682 [2024-07-24 17:26:14.823478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:87360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.682 [2024-07-24 17:26:14.823486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.682 [2024-07-24 17:26:14.823496] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b28d0 is same with the state(5) to be set 00:25:28.682 [2024-07-24 17:26:14.823508] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:28.682 [2024-07-24 17:26:14.823516] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:28.682 [2024-07-24 17:26:14.823524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:87368 len:8 PRP1 0x0 PRP2 0x0 00:25:28.683 [2024-07-24 17:26:14.823533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.683 [2024-07-24 17:26:14.823585] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x9b28d0 was disconnected and freed. reset controller. 00:25:28.683 [2024-07-24 17:26:14.823869] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:28.683 [2024-07-24 17:26:14.823952] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x945240 (9): Bad file descriptor 00:25:28.683 [2024-07-24 17:26:14.824058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.683 [2024-07-24 17:26:14.824079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x945240 with addr=10.0.0.2, port=4420 00:25:28.683 [2024-07-24 17:26:14.824090] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945240 is same with the state(5) to be set 00:25:28.683 [2024-07-24 17:26:14.824111] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x945240 (9): Bad file descriptor 00:25:28.683 [2024-07-24 17:26:14.824141] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:28.683 [2024-07-24 17:26:14.824153] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:28.683 [2024-07-24 17:26:14.824164] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:28.683 [2024-07-24 17:26:14.824184] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
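The burst of "ABORTED - SQ DELETION" completions above is the host-side NVMe driver inside bdevperf aborting its queued I/O while it tears down the TCP qpair and retries the connection to 10.0.0.2:4420; the get_controller/get_bdev steps traced below then poll the bdevperf RPC socket to confirm the controller and bdev are still registered. A minimal bash sketch of that check, mirroring only the commands visible in this trace (same rpc.py path, /var/tmp/bdevperf.sock socket, and the NVMe0/NVMe0n1 names) and assuming bdevperf is already listening on that socket:

# Sketch of the get_controller/get_bdev polling traced below (assumptions noted above).
get_controller() {
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    bdev_nvme_get_controllers | jq -r '.[].name'   # controller name, e.g. NVMe0
}
get_bdev() {
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    bdev_get_bdevs | jq -r '.[].name'               # bdev name, e.g. NVMe0n1
}
[[ "$(get_controller)" == NVMe0 ]] && [[ "$(get_bdev)" == NVMe0n1 ]]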
00:25:28.683 [2024-07-24 17:26:14.824200] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:28.683 17:26:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:25:31.215 [2024-07-24 17:26:16.824471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.215 [2024-07-24 17:26:16.824536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x945240 with addr=10.0.0.2, port=4420 00:25:31.215 [2024-07-24 17:26:16.824554] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945240 is same with the state(5) to be set 00:25:31.215 [2024-07-24 17:26:16.824583] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x945240 (9): Bad file descriptor 00:25:31.215 [2024-07-24 17:26:16.824615] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:31.215 [2024-07-24 17:26:16.824628] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:31.215 [2024-07-24 17:26:16.824639] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:31.215 [2024-07-24 17:26:16.824666] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:31.215 [2024-07-24 17:26:16.824677] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:31.215 17:26:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:25:31.215 17:26:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:31.215 17:26:16 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:25:31.215 17:26:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:25:31.215 17:26:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:25:31.216 17:26:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:25:31.216 17:26:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:25:31.473 17:26:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:25:31.473 17:26:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:25:32.873 [2024-07-24 17:26:18.824805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:32.873 [2024-07-24 17:26:18.824882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x945240 with addr=10.0.0.2, port=4420 00:25:32.873 [2024-07-24 17:26:18.824898] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x945240 is same with the state(5) to be set 00:25:32.873 [2024-07-24 17:26:18.824923] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x945240 (9): Bad file descriptor 00:25:32.873 [2024-07-24 17:26:18.824941] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:32.873 [2024-07-24 17:26:18.824950] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:32.873 [2024-07-24 
17:26:18.824961] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:32.873 [2024-07-24 17:26:18.824986] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:32.873 [2024-07-24 17:26:18.824997] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:34.782 [2024-07-24 17:26:20.825151] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:34.782 [2024-07-24 17:26:20.825237] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:34.782 [2024-07-24 17:26:20.825248] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:34.782 [2024-07-24 17:26:20.825258] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:25:34.782 [2024-07-24 17:26:20.825286] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:35.719 00:25:35.719 Latency(us) 00:25:35.719 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:35.719 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:35.719 Verification LBA range: start 0x0 length 0x4000 00:25:35.719 NVMe0n1 : 8.17 1321.82 5.16 15.66 0.00 95564.57 2219.29 7015926.69 00:25:35.719 =================================================================================================================== 00:25:35.719 Total : 1321.82 5.16 15.66 0.00 95564.57 2219.29 7015926.69 00:25:35.719 0 00:25:36.287 17:26:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:25:36.287 17:26:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:36.287 17:26:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:25:36.545 17:26:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:25:36.545 17:26:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:25:36.545 17:26:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:25:36.546 17:26:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:25:36.804 17:26:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:25:36.804 17:26:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 95686 00:25:36.804 17:26:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 95638 00:25:36.804 17:26:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 95638 ']' 00:25:36.804 17:26:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 95638 00:25:36.804 17:26:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:25:36.804 17:26:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:36.804 17:26:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 95638 00:25:37.063 17:26:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:25:37.063 17:26:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo 
']' 00:25:37.063 killing process with pid 95638 00:25:37.063 17:26:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 95638' 00:25:37.063 17:26:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 95638 00:25:37.063 Received shutdown signal, test time was about 9.386476 seconds 00:25:37.063 00:25:37.063 Latency(us) 00:25:37.063 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:37.063 =================================================================================================================== 00:25:37.063 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:37.063 17:26:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 95638 00:25:37.063 17:26:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:37.327 [2024-07-24 17:26:23.461836] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:37.327 17:26:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=95838 00:25:37.327 17:26:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:25:37.327 17:26:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 95838 /var/tmp/bdevperf.sock 00:25:37.327 17:26:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 95838 ']' 00:25:37.327 17:26:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:37.327 17:26:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:37.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:37.327 17:26:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:37.327 17:26:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:37.327 17:26:23 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:37.327 [2024-07-24 17:26:23.533818] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
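Before the 10-second I/O run that follows, the freshly started bdevperf instance is wired up entirely over its RPC socket. A short bash sketch of that wiring, copied from the bdev_nvme_set_options/bdev_nvme_attach_controller invocations traced just below (same socket, controller name, target address, NQN and reconnect flags); the one-line summaries of the timeout flags reflect SPDK's bdev_nvme reconnect options in general, not anything this log prints:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Options call copied from the trace below, then attach the NVMe-oF/TCP controller.
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
# --reconnect-delay-sec spaces reconnect attempts, --fast-io-fail-timeout-sec
# fails queued I/O early while reconnecting, and --ctrlr-loss-timeout-sec caps
# how long reconnection is attempted before the controller is given up.
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
  -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
  --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1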
00:25:37.327 [2024-07-24 17:26:23.533912] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95838 ] 00:25:37.586 [2024-07-24 17:26:23.667539] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:37.586 [2024-07-24 17:26:23.777181] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:38.519 17:26:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:38.519 17:26:24 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:25:38.519 17:26:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:25:38.519 17:26:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:25:38.777 NVMe0n1 00:25:39.037 17:26:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:39.037 17:26:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=95891 00:25:39.037 17:26:25 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:25:39.037 Running I/O for 10 seconds... 00:25:39.973 17:26:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:40.261 [2024-07-24 17:26:26.238396] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2465e10 is same with the state(5) to be set 00:25:40.261 [2024-07-24 17:26:26.238448] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2465e10 is same with the state(5) to be set 00:25:40.261 [2024-07-24 17:26:26.238475] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2465e10 is same with the state(5) to be set 00:25:40.261 [2024-07-24 17:26:26.238484] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2465e10 is same with the state(5) to be set 00:25:40.261 [2024-07-24 17:26:26.238508] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2465e10 is same with the state(5) to be set 00:25:40.261 [2024-07-24 17:26:26.238532] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2465e10 is same with the state(5) to be set 00:25:40.261 [2024-07-24 17:26:26.238539] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2465e10 is same with the state(5) to be set 00:25:40.261 [2024-07-24 17:26:26.238927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:40.261 [2024-07-24 17:26:26.238968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.261 [2024-07-24 17:26:26.238990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:87472 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:25:40.261 [2024-07-24 17:26:26.239001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.261 [2024-07-24 17:26:26.239012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:87480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:40.261 [2024-07-24 17:26:26.239021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.261 [2024-07-24 17:26:26.239032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:87488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:40.261 [2024-07-24 17:26:26.239041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.261 [2024-07-24 17:26:26.239052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:87496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:40.261 [2024-07-24 17:26:26.239060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.261 [2024-07-24 17:26:26.239071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:87504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:40.261 [2024-07-24 17:26:26.239080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.261 [2024-07-24 17:26:26.239090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:40.261 [2024-07-24 17:26:26.239135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.261 [2024-07-24 17:26:26.239149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:87520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:40.261 [2024-07-24 17:26:26.239159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.261 [2024-07-24 17:26:26.239170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:87528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:40.261 [2024-07-24 17:26:26.239179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.261 [2024-07-24 17:26:26.239190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:87536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:40.261 [2024-07-24 17:26:26.239199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.261 [2024-07-24 17:26:26.239209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:87544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:40.261 [2024-07-24 17:26:26.239218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.261 [2024-07-24 17:26:26.239229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:87552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:40.261 [2024-07-24 17:26:26.239239] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.261 [2024-07-24 17:26:26.239250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:40.261 [2024-07-24 17:26:26.239259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.261 [2024-07-24 17:26:26.239270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:87568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:40.261 [2024-07-24 17:26:26.239279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.261 [2024-07-24 17:26:26.239290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:87576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:40.261 [2024-07-24 17:26:26.239299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.261 [2024-07-24 17:26:26.239310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:87584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:40.261 [2024-07-24 17:26:26.239319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.261 [2024-07-24 17:26:26.239330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:87592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:40.261 [2024-07-24 17:26:26.239340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.261 [2024-07-24 17:26:26.239351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:87600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:40.261 [2024-07-24 17:26:26.239360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.261 [2024-07-24 17:26:26.239371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:87608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:40.261 [2024-07-24 17:26:26.239380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.261 [2024-07-24 17:26:26.239391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:87616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:40.261 [2024-07-24 17:26:26.239400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.261 [2024-07-24 17:26:26.239411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:40.261 [2024-07-24 17:26:26.239420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.261 [2024-07-24 17:26:26.239431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:87632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:40.262 [2024-07-24 17:26:26.239440] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.262 [2024-07-24 17:26:26.239451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:87640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:40.262 [2024-07-24 17:26:26.239460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.262 [2024-07-24 17:26:26.239470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:87648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:40.262 [2024-07-24 17:26:26.239480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.262 [2024-07-24 17:26:26.239491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:87656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:40.262 [2024-07-24 17:26:26.239500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.262 [2024-07-24 17:26:26.239510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:87664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:40.262 [2024-07-24 17:26:26.239519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.262 [2024-07-24 17:26:26.239530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:87672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:40.262 [2024-07-24 17:26:26.239539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.262 [2024-07-24 17:26:26.239550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:87680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:40.262 [2024-07-24 17:26:26.239558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.262 [2024-07-24 17:26:26.239569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:87688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:40.262 [2024-07-24 17:26:26.239578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.262 [2024-07-24 17:26:26.239589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:87696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:40.262 [2024-07-24 17:26:26.239598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.262 [2024-07-24 17:26:26.239608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:87704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:40.262 [2024-07-24 17:26:26.239619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.262 [2024-07-24 17:26:26.239630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:87712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:40.262 [2024-07-24 17:26:26.239639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.262 [2024-07-24 17:26:26.239651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:87720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:40.262 [2024-07-24 17:26:26.239661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.262 [2024-07-24 17:26:26.239672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:87016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.262 [2024-07-24 17:26:26.239681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.262 [2024-07-24 17:26:26.239707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:87024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.262 [2024-07-24 17:26:26.239716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.262 [2024-07-24 17:26:26.239727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:87032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.262 [2024-07-24 17:26:26.239736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.262 [2024-07-24 17:26:26.239746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:87040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.262 [2024-07-24 17:26:26.239755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.262 [2024-07-24 17:26:26.239766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:87048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.262 [2024-07-24 17:26:26.239774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.262 [2024-07-24 17:26:26.239785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:87056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.262 [2024-07-24 17:26:26.239793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.262 [2024-07-24 17:26:26.239805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:87064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.262 [2024-07-24 17:26:26.239814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.262 [2024-07-24 17:26:26.239824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:87072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.262 [2024-07-24 17:26:26.239833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.262 [2024-07-24 17:26:26.239844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:87080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.262 [2024-07-24 17:26:26.239853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.262 
[2024-07-24 17:26:26.239864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:87088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.262 [2024-07-24 17:26:26.239872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.262 [2024-07-24 17:26:26.239883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:87096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.262 [2024-07-24 17:26:26.239892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.262 [2024-07-24 17:26:26.239902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:87104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.262 [2024-07-24 17:26:26.239911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.262 [2024-07-24 17:26:26.239922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:87112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.262 [2024-07-24 17:26:26.239947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.262 [2024-07-24 17:26:26.239958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:87120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.262 [2024-07-24 17:26:26.239967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.262 [2024-07-24 17:26:26.239978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:87128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.262 [2024-07-24 17:26:26.239987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.262 [2024-07-24 17:26:26.239998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:87136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.262 [2024-07-24 17:26:26.240007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.262 [2024-07-24 17:26:26.240018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:87144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.262 [2024-07-24 17:26:26.240027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.262 [2024-07-24 17:26:26.240038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:87152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.262 [2024-07-24 17:26:26.240047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.262 [2024-07-24 17:26:26.240058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:87160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.262 [2024-07-24 17:26:26.240067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.262 [2024-07-24 17:26:26.240078] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:87168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.262 [2024-07-24 17:26:26.240087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.262 [2024-07-24 17:26:26.240098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:87176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.262 [2024-07-24 17:26:26.240107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.262 [2024-07-24 17:26:26.240118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:87184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.262 [2024-07-24 17:26:26.240135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.262 [2024-07-24 17:26:26.240149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:87192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.262 [2024-07-24 17:26:26.240158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.262 [2024-07-24 17:26:26.240169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:87200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.262 [2024-07-24 17:26:26.240179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.262 [2024-07-24 17:26:26.240190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:87208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.263 [2024-07-24 17:26:26.240199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.263 [2024-07-24 17:26:26.240210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:87216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.263 [2024-07-24 17:26:26.240219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.263 [2024-07-24 17:26:26.240230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:87224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.263 [2024-07-24 17:26:26.240239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.263 [2024-07-24 17:26:26.240250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:87232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.263 [2024-07-24 17:26:26.240260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.263 [2024-07-24 17:26:26.240271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:87240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.263 [2024-07-24 17:26:26.240280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.263 [2024-07-24 17:26:26.240291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:82 nsid:1 lba:87248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.263 [2024-07-24 17:26:26.240299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.263 [2024-07-24 17:26:26.240320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:87256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.263 [2024-07-24 17:26:26.240329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.263 [2024-07-24 17:26:26.240340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:87264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.263 [2024-07-24 17:26:26.240349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.263 [2024-07-24 17:26:26.240360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:87272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.263 [2024-07-24 17:26:26.240368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.263 [2024-07-24 17:26:26.240379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:87280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.263 [2024-07-24 17:26:26.240388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.263 [2024-07-24 17:26:26.240400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:87288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.263 [2024-07-24 17:26:26.240408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.263 [2024-07-24 17:26:26.240419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:87296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.263 [2024-07-24 17:26:26.240429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.263 [2024-07-24 17:26:26.240440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:87304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.263 [2024-07-24 17:26:26.240449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.263 [2024-07-24 17:26:26.240460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:87312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.263 [2024-07-24 17:26:26.240469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.263 [2024-07-24 17:26:26.240480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:87320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.263 [2024-07-24 17:26:26.240490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.263 [2024-07-24 17:26:26.240501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:87328 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.263 [2024-07-24 17:26:26.240510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.263 [2024-07-24 17:26:26.240521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:87728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:40.263 [2024-07-24 17:26:26.240530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.263 [2024-07-24 17:26:26.240541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:87736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:40.263 [2024-07-24 17:26:26.240550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.263 [2024-07-24 17:26:26.240561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:87744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:40.263 [2024-07-24 17:26:26.240570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.263 [2024-07-24 17:26:26.240581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:87752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:40.263 [2024-07-24 17:26:26.240590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.263 [2024-07-24 17:26:26.240600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:87760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:40.263 [2024-07-24 17:26:26.240609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.263 [2024-07-24 17:26:26.240620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:87768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:40.263 [2024-07-24 17:26:26.240629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.263 [2024-07-24 17:26:26.240640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:87776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:40.263 [2024-07-24 17:26:26.240649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.263 [2024-07-24 17:26:26.240660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:87784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:40.263 [2024-07-24 17:26:26.240669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.263 [2024-07-24 17:26:26.240680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:87792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:40.263 [2024-07-24 17:26:26.240688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.263 [2024-07-24 17:26:26.240699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:87800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:40.263 [2024-07-24 17:26:26.240709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.263 [2024-07-24 17:26:26.240719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:87808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:40.263 [2024-07-24 17:26:26.240728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.263 [2024-07-24 17:26:26.240739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:87816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:40.263 [2024-07-24 17:26:26.240748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.263 [2024-07-24 17:26:26.240760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:87824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:40.263 [2024-07-24 17:26:26.240769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.263 [2024-07-24 17:26:26.240780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:87832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:40.263 [2024-07-24 17:26:26.240790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.263 [2024-07-24 17:26:26.240801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:87840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:40.263 [2024-07-24 17:26:26.240810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.263 [2024-07-24 17:26:26.240821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:87848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:40.263 [2024-07-24 17:26:26.240830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.263 [2024-07-24 17:26:26.240840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:87856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:40.263 [2024-07-24 17:26:26.240849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.263 [2024-07-24 17:26:26.240860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:87864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:40.263 [2024-07-24 17:26:26.240869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.263 [2024-07-24 17:26:26.240879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:87872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:40.263 [2024-07-24 17:26:26.240888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.263 [2024-07-24 17:26:26.240899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:87880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:40.263 [2024-07-24 17:26:26.240908] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.263 [2024-07-24 17:26:26.240919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:87888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:40.263 [2024-07-24 17:26:26.240928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.264 [2024-07-24 17:26:26.240939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:87896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:40.264 [2024-07-24 17:26:26.240948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.264 [2024-07-24 17:26:26.240959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:87904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:40.264 [2024-07-24 17:26:26.240968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.264 [2024-07-24 17:26:26.240979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:87912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:40.264 [2024-07-24 17:26:26.240988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.264 [2024-07-24 17:26:26.240999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:87920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:40.264 [2024-07-24 17:26:26.241016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.264 [2024-07-24 17:26:26.241028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:87928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:40.264 [2024-07-24 17:26:26.241037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.264 [2024-07-24 17:26:26.241048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:87936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:40.264 [2024-07-24 17:26:26.241058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.264 [2024-07-24 17:26:26.241069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:87944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:40.264 [2024-07-24 17:26:26.241078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.264 [2024-07-24 17:26:26.241089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:87952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:40.264 [2024-07-24 17:26:26.241107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.264 [2024-07-24 17:26:26.241119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:87960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:40.264 [2024-07-24 17:26:26.241129] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.264 [2024-07-24 17:26:26.241140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:87968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:40.264 [2024-07-24 17:26:26.241149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.264 [2024-07-24 17:26:26.241160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:87976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:40.264 [2024-07-24 17:26:26.241169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.264 [2024-07-24 17:26:26.241179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:87984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:40.264 [2024-07-24 17:26:26.241188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.264 [2024-07-24 17:26:26.241199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:87992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:40.264 [2024-07-24 17:26:26.241208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.264 [2024-07-24 17:26:26.241218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:88000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:40.264 [2024-07-24 17:26:26.241227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.264 [2024-07-24 17:26:26.241238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:88008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:40.264 [2024-07-24 17:26:26.241248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.264 [2024-07-24 17:26:26.241258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:88016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:40.264 [2024-07-24 17:26:26.241267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.264 [2024-07-24 17:26:26.241278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:88024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:40.264 [2024-07-24 17:26:26.241287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.264 [2024-07-24 17:26:26.241298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:88032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:40.264 [2024-07-24 17:26:26.241307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.264 [2024-07-24 17:26:26.241318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:87336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.264 [2024-07-24 17:26:26.241328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.264 [2024-07-24 17:26:26.241339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:87344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.264 [2024-07-24 17:26:26.241354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.264 [2024-07-24 17:26:26.241365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:87352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.264 [2024-07-24 17:26:26.241374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.264 [2024-07-24 17:26:26.241385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:87360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.264 [2024-07-24 17:26:26.241394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.264 [2024-07-24 17:26:26.241405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:87368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.264 [2024-07-24 17:26:26.241414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.264 [2024-07-24 17:26:26.241424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:87376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.264 [2024-07-24 17:26:26.241433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.264 [2024-07-24 17:26:26.241444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:87384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.264 [2024-07-24 17:26:26.241453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.264 [2024-07-24 17:26:26.241464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:87392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.264 [2024-07-24 17:26:26.241473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.264 [2024-07-24 17:26:26.241498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:87400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.264 [2024-07-24 17:26:26.241507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.264 [2024-07-24 17:26:26.241518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:87408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.264 [2024-07-24 17:26:26.241526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.264 [2024-07-24 17:26:26.241537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:87416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.264 [2024-07-24 17:26:26.241546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:40.264 [2024-07-24 17:26:26.241556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:87424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.264 [2024-07-24 17:26:26.241565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.264 [2024-07-24 17:26:26.241576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:87432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.264 [2024-07-24 17:26:26.241584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.264 [2024-07-24 17:26:26.241595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:87440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.264 [2024-07-24 17:26:26.241604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.264 [2024-07-24 17:26:26.241615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:87448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:40.264 [2024-07-24 17:26:26.241624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.264 [2024-07-24 17:26:26.241651] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:40.264 [2024-07-24 17:26:26.241661] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:40.264 [2024-07-24 17:26:26.241669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:87456 len:8 PRP1 0x0 PRP2 0x0 00:25:40.264 [2024-07-24 17:26:26.241685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.264 [2024-07-24 17:26:26.241739] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xf8db20 was disconnected and freed. reset controller. 00:25:40.264 [2024-07-24 17:26:26.241984] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:40.264 [2024-07-24 17:26:26.242067] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf20240 (9): Bad file descriptor 00:25:40.264 [2024-07-24 17:26:26.242204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:40.265 [2024-07-24 17:26:26.242227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf20240 with addr=10.0.0.2, port=4420 00:25:40.265 [2024-07-24 17:26:26.242239] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf20240 is same with the state(5) to be set 00:25:40.265 [2024-07-24 17:26:26.242258] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf20240 (9): Bad file descriptor 00:25:40.265 [2024-07-24 17:26:26.242274] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:40.265 [2024-07-24 17:26:26.242284] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:40.265 [2024-07-24 17:26:26.242295] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
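Note: the repeated "connect() failed, errno = 111" above is ECONNREFUSED, i.e. nothing was accepting TCP connections on 10.0.0.2:4420 when the host tried to reconnect, so every reset attempt fails until the listener comes back. A minimal, illustrative check from the target side is sketched below; the ss and python3 invocations are my own additions (not part of the test scripts), only the port number is taken from the log.
# Is anything listening on the NVMe/TCP port right now?
ss -ltn | grep -w 4420 || echo "no listener on 4420 -> connect() fails with ECONNREFUSED"
# Decode errno 111 on this host (prints: ECONNREFUSED - Connection refused).
python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'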
00:25:40.265 [2024-07-24 17:26:26.242316] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:40.265 [2024-07-24 17:26:26.242327] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:25:40.265 17:26:26 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 
00:25:41.201 [2024-07-24 17:26:27.242445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:25:41.201 [2024-07-24 17:26:27.242512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf20240 with addr=10.0.0.2, port=4420 
00:25:41.201 [2024-07-24 17:26:27.242528] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf20240 is same with the state(5) to be set 
00:25:41.201 [2024-07-24 17:26:27.242552] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf20240 (9): Bad file descriptor 
00:25:41.201 [2024-07-24 17:26:27.242570] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 
00:25:41.201 [2024-07-24 17:26:27.242580] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 
00:25:41.201 [2024-07-24 17:26:27.242591] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:41.201 [2024-07-24 17:26:27.242618] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:41.201 [2024-07-24 17:26:27.242631] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:25:41.201 17:26:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:25:41.460 [2024-07-24 17:26:27.517828] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:25:41.460 17:26:27 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 95891 
00:25:42.029 [2024-07-24 17:26:28.257569] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:25:50.148 
00:25:50.148 Latency(us) 
00:25:50.148 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:25:50.148 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 
00:25:50.148 Verification LBA range: start 0x0 length 0x4000 
00:25:50.148 NVMe0n1 : 10.01 6935.49 27.09 0.00 0.00 18425.13 1861.82 3019898.88 
00:25:50.148 =================================================================================================================== 
00:25:50.148 Total : 6935.49 27.09 0.00 0.00 18425.13 1861.82 3019898.88 
00:25:50.148 0 
00:25:50.148 17:26:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=96009 
00:25:50.148 17:26:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 
00:25:50.148 17:26:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1 
00:25:50.148 Running I/O for 10 seconds... 
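For reference, the listener toggle that host/timeout.sh drives above (and in the @99 step that follows) can be reproduced by hand with the same RPC calls that appear in the trace. The sketch below is illustrative only: the paths, NQN, address and port are copied verbatim from the log, the shell variables are my own, and it assumes a running SPDK target plus a bdevperf process already serving RPCs on /var/tmp/bdevperf.sock.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
BDEVPERF=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
NQN=nqn.2016-06.io.spdk:cnode1
# Re-add the TCP listener so the host's reset/reconnect loop can succeed (as at timeout.sh@91 above).
"$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
# Start another timed I/O pass via the bdevperf RPC socket (as at timeout.sh@96 above).
"$BDEVPERF" -s /var/tmp/bdevperf.sock perform_tests &
# Remove the listener again to force the I/O-timeout/abort path exercised next (as at timeout.sh@99 below).
"$RPC" nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420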
00:25:50.148 17:26:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:50.148 [2024-07-24 17:26:36.370898] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464310 is same with the state(5) to be set 00:25:50.148 [2024-07-24 17:26:36.370962] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464310 is same with the state(5) to be set 00:25:50.148 [2024-07-24 17:26:36.370989] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464310 is same with the state(5) to be set 00:25:50.148 [2024-07-24 17:26:36.370997] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464310 is same with the state(5) to be set 00:25:50.148 [2024-07-24 17:26:36.371004] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464310 is same with the state(5) to be set 00:25:50.148 [2024-07-24 17:26:36.371011] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464310 is same with the state(5) to be set 00:25:50.148 [2024-07-24 17:26:36.371019] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464310 is same with the state(5) to be set 00:25:50.148 [2024-07-24 17:26:36.371027] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464310 is same with the state(5) to be set 00:25:50.148 [2024-07-24 17:26:36.371034] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464310 is same with the state(5) to be set 00:25:50.148 [2024-07-24 17:26:36.371042] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464310 is same with the state(5) to be set 00:25:50.148 [2024-07-24 17:26:36.371050] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464310 is same with the state(5) to be set 00:25:50.148 [2024-07-24 17:26:36.371057] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464310 is same with the state(5) to be set 00:25:50.148 [2024-07-24 17:26:36.371064] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464310 is same with the state(5) to be set 00:25:50.148 [2024-07-24 17:26:36.371071] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464310 is same with the state(5) to be set 00:25:50.148 [2024-07-24 17:26:36.371079] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464310 is same with the state(5) to be set 00:25:50.148 [2024-07-24 17:26:36.371086] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464310 is same with the state(5) to be set 00:25:50.148 [2024-07-24 17:26:36.371093] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464310 is same with the state(5) to be set 00:25:50.148 [2024-07-24 17:26:36.371099] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464310 is same with the state(5) to be set 00:25:50.148 [2024-07-24 17:26:36.371151] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464310 is same with the state(5) to be set 00:25:50.148 [2024-07-24 17:26:36.371162] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464310 is same with the state(5) to be set 00:25:50.148 [2024-07-24 17:26:36.371170] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464310 is same with the state(5) to be set 00:25:50.148 [2024-07-24 17:26:36.371178] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464310 is same with the state(5) to be set 00:25:50.148 [2024-07-24 17:26:36.371188] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464310 is same with the state(5) to be set 00:25:50.148 [2024-07-24 17:26:36.371196] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464310 is same with the state(5) to be set 00:25:50.148 [2024-07-24 17:26:36.371204] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464310 is same with the state(5) to be set 00:25:50.148 [2024-07-24 17:26:36.371212] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464310 is same with the state(5) to be set 00:25:50.148 [2024-07-24 17:26:36.371221] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464310 is same with the state(5) to be set 00:25:50.148 [2024-07-24 17:26:36.371229] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464310 is same with the state(5) to be set 00:25:50.148 [2024-07-24 17:26:36.371237] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464310 is same with the state(5) to be set 00:25:50.148 [2024-07-24 17:26:36.371245] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464310 is same with the state(5) to be set 00:25:50.148 [2024-07-24 17:26:36.371252] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464310 is same with the state(5) to be set 00:25:50.148 [2024-07-24 17:26:36.371260] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464310 is same with the state(5) to be set 00:25:50.148 [2024-07-24 17:26:36.371268] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464310 is same with the state(5) to be set 00:25:50.148 [2024-07-24 17:26:36.371276] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464310 is same with the state(5) to be set 00:25:50.148 [2024-07-24 17:26:36.371285] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464310 is same with the state(5) to be set 00:25:50.148 [2024-07-24 17:26:36.371293] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464310 is same with the state(5) to be set 00:25:50.148 [2024-07-24 17:26:36.371301] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464310 is same with the state(5) to be set 00:25:50.148 [2024-07-24 17:26:36.371308] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464310 is same with the state(5) to be set 00:25:50.148 [2024-07-24 17:26:36.371316] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464310 is same with the state(5) to be set 00:25:50.148 [2024-07-24 17:26:36.371323] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464310 is same with the state(5) to be set 00:25:50.148 [2024-07-24 17:26:36.371332] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464310 is same with the state(5) to be set 00:25:50.148 [2024-07-24 17:26:36.371339] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464310 is same with the 
state(5) to be set 00:25:50.148 [2024-07-24 17:26:36.371348] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464310 is same with the state(5) to be set 00:25:50.148 [2024-07-24 17:26:36.371356] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464310 is same with the state(5) to be set 00:25:50.148 [2024-07-24 17:26:36.371364] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464310 is same with the state(5) to be set 00:25:50.148 [2024-07-24 17:26:36.371374] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464310 is same with the state(5) to be set 00:25:50.148 [2024-07-24 17:26:36.371382] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464310 is same with the state(5) to be set 00:25:50.148 [2024-07-24 17:26:36.371390] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464310 is same with the state(5) to be set 00:25:50.148 [2024-07-24 17:26:36.371398] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464310 is same with the state(5) to be set 00:25:50.149 [2024-07-24 17:26:36.371406] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464310 is same with the state(5) to be set 00:25:50.149 [2024-07-24 17:26:36.371414] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464310 is same with the state(5) to be set 00:25:50.149 [2024-07-24 17:26:36.371425] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464310 is same with the state(5) to be set 00:25:50.149 [2024-07-24 17:26:36.371433] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464310 is same with the state(5) to be set 00:25:50.149 [2024-07-24 17:26:36.371441] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464310 is same with the state(5) to be set 00:25:50.149 [2024-07-24 17:26:36.371449] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464310 is same with the state(5) to be set 00:25:50.149 [2024-07-24 17:26:36.371458] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464310 is same with the state(5) to be set 00:25:50.149 [2024-07-24 17:26:36.371465] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464310 is same with the state(5) to be set 00:25:50.149 [2024-07-24 17:26:36.371473] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464310 is same with the state(5) to be set 00:25:50.149 [2024-07-24 17:26:36.371482] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464310 is same with the state(5) to be set 00:25:50.149 [2024-07-24 17:26:36.371490] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464310 is same with the state(5) to be set 00:25:50.149 [2024-07-24 17:26:36.371498] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464310 is same with the state(5) to be set 00:25:50.149 [2024-07-24 17:26:36.371506] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464310 is same with the state(5) to be set 00:25:50.149 [2024-07-24 17:26:36.371514] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464310 is same with the state(5) to be set 00:25:50.149 [2024-07-24 17:26:36.371521] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x2464310 is same with the state(5) to be set 00:25:50.149 [2024-07-24 17:26:36.371529] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464310 is same with the state(5) to be set 00:25:50.149 [2024-07-24 17:26:36.371537] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464310 is same with the state(5) to be set 00:25:50.149 [2024-07-24 17:26:36.371546] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464310 is same with the state(5) to be set 00:25:50.149 [2024-07-24 17:26:36.371554] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464310 is same with the state(5) to be set 00:25:50.149 [2024-07-24 17:26:36.371562] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464310 is same with the state(5) to be set 00:25:50.149 [2024-07-24 17:26:36.371570] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464310 is same with the state(5) to be set 00:25:50.149 [2024-07-24 17:26:36.371592] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464310 is same with the state(5) to be set 00:25:50.149 [2024-07-24 17:26:36.371599] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464310 is same with the state(5) to be set 00:25:50.149 [2024-07-24 17:26:36.371606] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464310 is same with the state(5) to be set 00:25:50.149 [2024-07-24 17:26:36.371614] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464310 is same with the state(5) to be set 00:25:50.149 [2024-07-24 17:26:36.371621] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464310 is same with the state(5) to be set 00:25:50.149 [2024-07-24 17:26:36.371628] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464310 is same with the state(5) to be set 00:25:50.149 [2024-07-24 17:26:36.371636] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464310 is same with the state(5) to be set 00:25:50.149 [2024-07-24 17:26:36.371643] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464310 is same with the state(5) to be set 00:25:50.149 [2024-07-24 17:26:36.371665] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464310 is same with the state(5) to be set 00:25:50.149 [2024-07-24 17:26:36.371673] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464310 is same with the state(5) to be set 00:25:50.149 [2024-07-24 17:26:36.371681] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464310 is same with the state(5) to be set 00:25:50.149 [2024-07-24 17:26:36.371689] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464310 is same with the state(5) to be set 00:25:50.149 [2024-07-24 17:26:36.371697] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464310 is same with the state(5) to be set 00:25:50.149 [2024-07-24 17:26:36.371705] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464310 is same with the state(5) to be set 00:25:50.149 [2024-07-24 17:26:36.371712] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464310 is same with the state(5) to be set 00:25:50.149 [2024-07-24 
17:26:36.371720] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464310 is same with the state(5) to be set 00:25:50.149 [2024-07-24 17:26:36.372103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:91864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.149 [2024-07-24 17:26:36.372155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.149 [2024-07-24 17:26:36.372179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:91872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.149 [2024-07-24 17:26:36.372191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.149 [2024-07-24 17:26:36.372203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:91880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.149 [2024-07-24 17:26:36.372213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.149 [2024-07-24 17:26:36.372224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:91888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.149 [2024-07-24 17:26:36.372234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.149 [2024-07-24 17:26:36.372246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:91896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.149 [2024-07-24 17:26:36.372255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.149 [2024-07-24 17:26:36.372266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:91904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.149 [2024-07-24 17:26:36.372275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.149 [2024-07-24 17:26:36.372286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:91912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.149 [2024-07-24 17:26:36.372295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.149 [2024-07-24 17:26:36.372306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:91920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.149 [2024-07-24 17:26:36.372315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.149 [2024-07-24 17:26:36.372325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:91928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.149 [2024-07-24 17:26:36.372334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.149 [2024-07-24 17:26:36.372346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:91936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.149 [2024-07-24 17:26:36.372355] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.149 [2024-07-24 17:26:36.372366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:91944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.149 [2024-07-24 17:26:36.372375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.149 [2024-07-24 17:26:36.372386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:91952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.149 [2024-07-24 17:26:36.372395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.149 [2024-07-24 17:26:36.372406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:91960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.149 [2024-07-24 17:26:36.372415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.149 [2024-07-24 17:26:36.372426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:91968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.149 [2024-07-24 17:26:36.372435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.149 [2024-07-24 17:26:36.372445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:91976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.149 [2024-07-24 17:26:36.372455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.149 [2024-07-24 17:26:36.372466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:91984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.149 [2024-07-24 17:26:36.372475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.149 [2024-07-24 17:26:36.372486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:91992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.149 [2024-07-24 17:26:36.372496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.149 [2024-07-24 17:26:36.372507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:92000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.149 [2024-07-24 17:26:36.372517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.149 [2024-07-24 17:26:36.372528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:92008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.149 [2024-07-24 17:26:36.372538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.149 [2024-07-24 17:26:36.372549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:92016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.149 [2024-07-24 17:26:36.372558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.149 [2024-07-24 17:26:36.372569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:92024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.149 [2024-07-24 17:26:36.372578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.149 [2024-07-24 17:26:36.372590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:92032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.150 [2024-07-24 17:26:36.372600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.150 [2024-07-24 17:26:36.372611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:92040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.150 [2024-07-24 17:26:36.372620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.150 [2024-07-24 17:26:36.372631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:92048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.150 [2024-07-24 17:26:36.372640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.150 [2024-07-24 17:26:36.372651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:92056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.150 [2024-07-24 17:26:36.372660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.150 [2024-07-24 17:26:36.372671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:92064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.150 [2024-07-24 17:26:36.372680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.150 [2024-07-24 17:26:36.372691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:92072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.150 [2024-07-24 17:26:36.372700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.150 [2024-07-24 17:26:36.372711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:92080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.150 [2024-07-24 17:26:36.372720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.150 [2024-07-24 17:26:36.372731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:92088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.150 [2024-07-24 17:26:36.372740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.150 [2024-07-24 17:26:36.372751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:92096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.150 [2024-07-24 17:26:36.372760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.150 [2024-07-24 17:26:36.372771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:92104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.150 [2024-07-24 17:26:36.372780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.150 [2024-07-24 17:26:36.372800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:92112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.150 [2024-07-24 17:26:36.372809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.150 [2024-07-24 17:26:36.372820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:92120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.150 [2024-07-24 17:26:36.372830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.150 [2024-07-24 17:26:36.372842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:92128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.150 [2024-07-24 17:26:36.372851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.150 [2024-07-24 17:26:36.372863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:92136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.150 [2024-07-24 17:26:36.372872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.150 [2024-07-24 17:26:36.372883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:92144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.150 [2024-07-24 17:26:36.372891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.150 [2024-07-24 17:26:36.372902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:92152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.150 [2024-07-24 17:26:36.372911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.150 [2024-07-24 17:26:36.372922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:92160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.150 [2024-07-24 17:26:36.372931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.150 [2024-07-24 17:26:36.372942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:92168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.150 [2024-07-24 17:26:36.372951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.150 [2024-07-24 17:26:36.372962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:92176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.150 [2024-07-24 17:26:36.372972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:50.150 [2024-07-24 17:26:36.372983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:92184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.150 [2024-07-24 17:26:36.372992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.150 [2024-07-24 17:26:36.373003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:92192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.150 [2024-07-24 17:26:36.373013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.150 [2024-07-24 17:26:36.373024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:92200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.150 [2024-07-24 17:26:36.373033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.150 [2024-07-24 17:26:36.373044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:92208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.150 [2024-07-24 17:26:36.373054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.150 [2024-07-24 17:26:36.373064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:92216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.150 [2024-07-24 17:26:36.373074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.150 [2024-07-24 17:26:36.373085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:92224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.150 [2024-07-24 17:26:36.373112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.150 [2024-07-24 17:26:36.373125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:92232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.150 [2024-07-24 17:26:36.373135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.150 [2024-07-24 17:26:36.373146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:92240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.150 [2024-07-24 17:26:36.373155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.150 [2024-07-24 17:26:36.373166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:92536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.150 [2024-07-24 17:26:36.373176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.150 [2024-07-24 17:26:36.373187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:92544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.150 [2024-07-24 17:26:36.373196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.150 [2024-07-24 17:26:36.373207] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:92552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.150 [2024-07-24 17:26:36.373216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.150 [2024-07-24 17:26:36.373227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:92560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.150 [2024-07-24 17:26:36.373237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.150 [2024-07-24 17:26:36.373248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:92568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.150 [2024-07-24 17:26:36.373262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.150 [2024-07-24 17:26:36.373274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:92576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.150 [2024-07-24 17:26:36.373283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.150 [2024-07-24 17:26:36.373294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:92584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.150 [2024-07-24 17:26:36.373303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.150 [2024-07-24 17:26:36.373314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:92592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.150 [2024-07-24 17:26:36.373324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.150 [2024-07-24 17:26:36.373345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:92600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.150 [2024-07-24 17:26:36.373354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.150 [2024-07-24 17:26:36.373365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:92608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.150 [2024-07-24 17:26:36.373374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.150 [2024-07-24 17:26:36.373385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:92616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.150 [2024-07-24 17:26:36.373394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.150 [2024-07-24 17:26:36.373404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:92624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.150 [2024-07-24 17:26:36.373414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.150 [2024-07-24 17:26:36.373440] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:92632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.150 [2024-07-24 17:26:36.373449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.151 [2024-07-24 17:26:36.373460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:92640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.151 [2024-07-24 17:26:36.373472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.151 [2024-07-24 17:26:36.373483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:92648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.151 [2024-07-24 17:26:36.373492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.151 [2024-07-24 17:26:36.373503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:92656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.151 [2024-07-24 17:26:36.373512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.151 [2024-07-24 17:26:36.373523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:92664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.151 [2024-07-24 17:26:36.373532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.151 [2024-07-24 17:26:36.373543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:92672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.151 [2024-07-24 17:26:36.373552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.151 [2024-07-24 17:26:36.373563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:92680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.151 [2024-07-24 17:26:36.373571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.151 [2024-07-24 17:26:36.373582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:92688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.151 [2024-07-24 17:26:36.373590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.151 [2024-07-24 17:26:36.373601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:92696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.151 [2024-07-24 17:26:36.373613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.151 [2024-07-24 17:26:36.373623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:92704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.151 [2024-07-24 17:26:36.373632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.151 [2024-07-24 17:26:36.373642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:92712 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.151 [2024-07-24 17:26:36.373651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.151 [2024-07-24 17:26:36.373661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:92720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.151 [2024-07-24 17:26:36.373670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.151 [2024-07-24 17:26:36.373680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:92728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.151 [2024-07-24 17:26:36.373689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.151 [2024-07-24 17:26:36.373700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:92736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.151 [2024-07-24 17:26:36.373708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.151 [2024-07-24 17:26:36.373735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:92744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.151 [2024-07-24 17:26:36.373744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.151 [2024-07-24 17:26:36.373754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:92752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.151 [2024-07-24 17:26:36.373763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.151 [2024-07-24 17:26:36.373774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:92760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.151 [2024-07-24 17:26:36.373783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.151 [2024-07-24 17:26:36.373794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:92768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.151 [2024-07-24 17:26:36.373807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.151 [2024-07-24 17:26:36.373818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:92776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.151 [2024-07-24 17:26:36.373827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.151 [2024-07-24 17:26:36.373838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:92784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.151 [2024-07-24 17:26:36.373847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.151 [2024-07-24 17:26:36.373857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:92792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.151 
[2024-07-24 17:26:36.373866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.151 [2024-07-24 17:26:36.373877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:92800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.151 [2024-07-24 17:26:36.373886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.151 [2024-07-24 17:26:36.373896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:92808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.151 [2024-07-24 17:26:36.373905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.151 [2024-07-24 17:26:36.373916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:92816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.151 [2024-07-24 17:26:36.373925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.151 [2024-07-24 17:26:36.373936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:92824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.151 [2024-07-24 17:26:36.373950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.151 [2024-07-24 17:26:36.373961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:92832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.151 [2024-07-24 17:26:36.373970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.151 [2024-07-24 17:26:36.373982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:92840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.151 [2024-07-24 17:26:36.373991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.151 [2024-07-24 17:26:36.374002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:92848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.151 [2024-07-24 17:26:36.374011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.151 [2024-07-24 17:26:36.374021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:92856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.151 [2024-07-24 17:26:36.374030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.151 [2024-07-24 17:26:36.374041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:92864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.151 [2024-07-24 17:26:36.374050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.151 [2024-07-24 17:26:36.374061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:92872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.151 [2024-07-24 17:26:36.374070] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.151 [2024-07-24 17:26:36.374081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:92880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:50.151 [2024-07-24 17:26:36.374090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.151 [2024-07-24 17:26:36.374100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:92248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.151 [2024-07-24 17:26:36.374109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.151 [2024-07-24 17:26:36.374120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:92256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.151 [2024-07-24 17:26:36.374143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.151 [2024-07-24 17:26:36.374155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.151 [2024-07-24 17:26:36.374164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.151 [2024-07-24 17:26:36.374175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:92272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.151 [2024-07-24 17:26:36.374185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.151 [2024-07-24 17:26:36.374196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:92280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.151 [2024-07-24 17:26:36.374205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.151 [2024-07-24 17:26:36.374216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:92288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.151 [2024-07-24 17:26:36.374225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.151 [2024-07-24 17:26:36.374236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:92296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.151 [2024-07-24 17:26:36.374245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.151 [2024-07-24 17:26:36.374256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:92304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.151 [2024-07-24 17:26:36.374265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.151 [2024-07-24 17:26:36.374276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:92312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.152 [2024-07-24 17:26:36.374287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.152 [2024-07-24 17:26:36.374298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:92320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.152 [2024-07-24 17:26:36.374307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.152 [2024-07-24 17:26:36.374318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:92328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.152 [2024-07-24 17:26:36.374327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.152 [2024-07-24 17:26:36.374338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:92336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.152 [2024-07-24 17:26:36.374346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.152 [2024-07-24 17:26:36.374357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:92344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.152 [2024-07-24 17:26:36.374366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.152 [2024-07-24 17:26:36.374377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:92352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.152 [2024-07-24 17:26:36.374385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.152 [2024-07-24 17:26:36.374396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:92360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.152 [2024-07-24 17:26:36.374405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.152 [2024-07-24 17:26:36.374416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:92368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.152 [2024-07-24 17:26:36.374425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.152 [2024-07-24 17:26:36.374435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:92376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.152 [2024-07-24 17:26:36.374444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.152 [2024-07-24 17:26:36.374455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:92384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.152 [2024-07-24 17:26:36.374469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.152 [2024-07-24 17:26:36.374480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:92392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.152 [2024-07-24 17:26:36.374489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.152 [2024-07-24 17:26:36.374500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:92400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.152 [2024-07-24 17:26:36.374509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.152 [2024-07-24 17:26:36.374520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:92408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.152 [2024-07-24 17:26:36.374529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.152 [2024-07-24 17:26:36.374540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:92416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.152 [2024-07-24 17:26:36.374549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.152 [2024-07-24 17:26:36.374560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:92424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.152 [2024-07-24 17:26:36.374569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.152 [2024-07-24 17:26:36.374580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:92432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.152 [2024-07-24 17:26:36.374589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.152 [2024-07-24 17:26:36.374600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:92440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.152 [2024-07-24 17:26:36.374614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.152 [2024-07-24 17:26:36.374626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:92448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.152 [2024-07-24 17:26:36.374635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.152 [2024-07-24 17:26:36.374646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:92456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.152 [2024-07-24 17:26:36.374655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.152 [2024-07-24 17:26:36.374665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:92464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.152 [2024-07-24 17:26:36.374674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.152 [2024-07-24 17:26:36.374685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:92472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.152 [2024-07-24 17:26:36.374694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:50.152 [2024-07-24 17:26:36.374705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:92480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.152 [2024-07-24 17:26:36.374714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.152 [2024-07-24 17:26:36.374725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:92488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.152 [2024-07-24 17:26:36.374734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.152 [2024-07-24 17:26:36.374745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:92496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.152 [2024-07-24 17:26:36.374754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.152 [2024-07-24 17:26:36.374765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:92504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.152 [2024-07-24 17:26:36.374774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.152 [2024-07-24 17:26:36.374785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:92512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.152 [2024-07-24 17:26:36.374799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.152 [2024-07-24 17:26:36.374810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:92520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:50.152 [2024-07-24 17:26:36.374819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.152 [2024-07-24 17:26:36.374830] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa1050 is same with the state(5) to be set 00:25:50.152 [2024-07-24 17:26:36.374842] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:50.152 [2024-07-24 17:26:36.374850] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:50.152 [2024-07-24 17:26:36.374858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:92528 len:8 PRP1 0x0 PRP2 0x0 00:25:50.152 [2024-07-24 17:26:36.374867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.152 [2024-07-24 17:26:36.374919] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xfa1050 was disconnected and freed. reset controller. 
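The block of *NOTICE* completions above is bdevperf reporting the I/O that was still queued on qid:1 when the submission queue was deleted for the controller reset: each outstanding READ and WRITE finishes as ABORTED - SQ DELETION, after which the qpair (0xfa1050 here) is disconnected and freed and the reset proceeds. When reading a saved copy of this console output offline, one quick way to condense such a flood is to count the aborted commands per opcode. A minimal sketch, assuming the output has been saved to build.log (the file name is only an illustration, not part of the test):

  # Tally aborted I/O commands by opcode (READ vs WRITE) in a saved copy of this log.
  grep -oE 'nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE)' build.log \
    | awk '{ count[$NF]++ } END { for (op in count) print op, count[op] }'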
00:25:50.152 [2024-07-24 17:26:36.374991] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:50.152 [2024-07-24 17:26:36.375007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.152 [2024-07-24 17:26:36.375018] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:50.152 [2024-07-24 17:26:36.375027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.152 [2024-07-24 17:26:36.375037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:50.152 [2024-07-24 17:26:36.375051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.153 [2024-07-24 17:26:36.375062] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:50.153 [2024-07-24 17:26:36.375071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:50.153 [2024-07-24 17:26:36.375080] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf20240 is same with the state(5) to be set 00:25:50.153 [2024-07-24 17:26:36.375320] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:50.153 [2024-07-24 17:26:36.375355] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf20240 (9): Bad file descriptor 00:25:50.153 [2024-07-24 17:26:36.375456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.153 [2024-07-24 17:26:36.375490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf20240 with addr=10.0.0.2, port=4420 00:25:50.153 [2024-07-24 17:26:36.375503] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf20240 is same with the state(5) to be set 00:25:50.153 [2024-07-24 17:26:36.375522] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf20240 (9): Bad file descriptor 00:25:50.153 [2024-07-24 17:26:36.375539] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:50.153 [2024-07-24 17:26:36.375549] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:50.153 [2024-07-24 17:26:36.375559] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:50.153 [2024-07-24 17:26:36.375579] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:50.411 [2024-07-24 17:26:36.390337] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:50.411 17:26:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:25:51.346 [2024-07-24 17:26:37.390508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.346 [2024-07-24 17:26:37.390592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf20240 with addr=10.0.0.2, port=4420 00:25:51.346 [2024-07-24 17:26:37.390607] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf20240 is same with the state(5) to be set 00:25:51.346 [2024-07-24 17:26:37.390633] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf20240 (9): Bad file descriptor 00:25:51.346 [2024-07-24 17:26:37.390651] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.346 [2024-07-24 17:26:37.390661] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.346 [2024-07-24 17:26:37.390672] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.346 [2024-07-24 17:26:37.390699] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:51.347 [2024-07-24 17:26:37.390711] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.282 [2024-07-24 17:26:38.390832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.282 [2024-07-24 17:26:38.390913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf20240 with addr=10.0.0.2, port=4420 00:25:52.282 [2024-07-24 17:26:38.390928] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf20240 is same with the state(5) to be set 00:25:52.282 [2024-07-24 17:26:38.390952] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf20240 (9): Bad file descriptor 00:25:52.282 [2024-07-24 17:26:38.390970] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.282 [2024-07-24 17:26:38.390979] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.282 [2024-07-24 17:26:38.390992] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.282 [2024-07-24 17:26:38.391019] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.282 [2024-07-24 17:26:38.391032] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.218 [2024-07-24 17:26:39.391522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.218 [2024-07-24 17:26:39.391639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf20240 with addr=10.0.0.2, port=4420 00:25:53.218 [2024-07-24 17:26:39.391656] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf20240 is same with the state(5) to be set 00:25:53.218 [2024-07-24 17:26:39.391970] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf20240 (9): Bad file descriptor 00:25:53.219 [2024-07-24 17:26:39.392258] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.219 [2024-07-24 17:26:39.392283] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.219 [2024-07-24 17:26:39.392296] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.219 [2024-07-24 17:26:39.396396] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:53.219 [2024-07-24 17:26:39.396429] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.219 17:26:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:53.477 [2024-07-24 17:26:39.649401] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:53.477 17:26:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 96009 00:25:54.413 [2024-07-24 17:26:40.430790] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
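This stretch is the recovery half of the timeout test: with the target no longer accepting connections on 10.0.0.2:4420, every reconnect attempt fails in posix_sock_create with errno 111 (connection refused) and bdev_nvme keeps retrying on its reconnect delay, until host/timeout.sh@102 re-adds the listener and the next reset completes ("Resetting controller successful"). The fault-injection pattern the script drives can be reproduced with the same rpc.py calls that appear in this trace; a rough sketch, assuming a target whose subsystem nqn.2016-06.io.spdk:cnode1 already has a connected host on 10.0.0.2:4420:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  subsys=nqn.2016-06.io.spdk:cnode1

  # Pull the TCP listener out from under the connected host ...
  $rpc nvmf_subsystem_remove_listener "$subsys" -t tcp -a 10.0.0.2 -s 4420
  # ... leave it down long enough for a few failed reconnect attempts
  # (host/timeout.sh sleeps 3 seconds at this point) ...
  sleep 3
  # ... then restore it so the controller reset/reconnect path can succeed again.
  $rpc nvmf_subsystem_add_listener "$subsys" -t tcp -a 10.0.0.2 -s 4420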
00:25:59.742 00:25:59.742 Latency(us) 00:25:59.742 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:59.743 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:59.743 Verification LBA range: start 0x0 length 0x4000 00:25:59.743 NVMe0n1 : 10.01 6141.13 23.99 4118.53 0.00 12451.66 536.20 3019898.88 00:25:59.743 =================================================================================================================== 00:25:59.743 Total : 6141.13 23.99 4118.53 0.00 12451.66 0.00 3019898.88 00:25:59.743 0 00:25:59.743 17:26:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 95838 00:25:59.743 17:26:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 95838 ']' 00:25:59.743 17:26:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 95838 00:25:59.743 17:26:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:25:59.743 17:26:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:59.743 17:26:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 95838 00:25:59.743 17:26:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:25:59.743 17:26:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:25:59.743 killing process with pid 95838 00:25:59.743 17:26:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 95838' 00:25:59.743 Received shutdown signal, test time was about 10.000000 seconds 00:25:59.743 00:25:59.743 Latency(us) 00:25:59.743 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:59.743 =================================================================================================================== 00:25:59.743 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:59.743 17:26:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 95838 00:25:59.743 17:26:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 95838 00:25:59.743 17:26:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:25:59.743 17:26:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=96136 00:25:59.743 17:26:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 96136 /var/tmp/bdevperf.sock 00:25:59.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:59.743 17:26:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 96136 ']' 00:25:59.743 17:26:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:59.743 17:26:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:59.743 17:26:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:25:59.743 17:26:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:59.743 17:26:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:59.743 [2024-07-24 17:26:45.589741] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:25:59.743 [2024-07-24 17:26:45.590006] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96136 ] 00:25:59.743 [2024-07-24 17:26:45.729345] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:59.743 [2024-07-24 17:26:45.828504] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:00.679 17:26:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:00.679 17:26:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:26:00.679 17:26:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96136 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:26:00.679 17:26:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=96160 00:26:00.679 17:26:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:26:00.679 17:26:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:26:00.937 NVMe0n1 00:26:01.196 17:26:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=96219 00:26:01.196 17:26:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:01.196 17:26:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:26:01.196 Running I/O for 10 seconds... 
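For this last phase the script restarts bdevperf on its own RPC socket, pinned to core 2 (-m 0x4) and started idle (-z) so I/O begins only when perform_tests is sent, attaches the controller with a deliberately short --ctrlr-loss-timeout-sec 5 and --reconnect-delay-sec 2, hooks the nvmf_timeout.bt bpftrace probe to the new pid, and then issues perform_tests, so the 10-second randread run is in flight when the listener is removed below at host/timeout.sh@126. Pulled together from the trace above, the setup amounts to roughly the following sequence (paths are the ones used in this CI workspace; the waitforlisten and bpftrace bookkeeping is simplified away):

  spdk=/home/vagrant/spdk_repo/spdk
  sock=/var/tmp/bdevperf.sock

  # Start bdevperf idle (-z): queue depth 128, 4096-byte random reads for 10 s on core mask 0x4.
  $spdk/build/examples/bdevperf -m 0x4 -z -r "$sock" -q 128 -o 4096 -w randread -t 10 -f &

  # Same bdev_nvme options the test sets (-r -1 -e 9), then attach the target subsystem
  # with a 5 s controller-loss timeout and a 2 s reconnect delay.
  $spdk/scripts/rpc.py -s "$sock" bdev_nvme_set_options -r -1 -e 9
  $spdk/scripts/rpc.py -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

  # Kick off the queued job; bdevperf prints "Running I/O for 10 seconds..." once it starts.
  $spdk/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests &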
00:26:02.132 17:26:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:02.394 [2024-07-24 17:26:48.443401] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2467c80 is same with the state(5) to be set 00:26:02.394 [2024-07-24 17:26:48.443457] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2467c80 is same with the state(5) to be set 00:26:02.394 [2024-07-24 17:26:48.443474] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2467c80 is same with the state(5) to be set 00:26:02.394 [2024-07-24 17:26:48.443483] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2467c80 is same with the state(5) to be set 00:26:02.394 [2024-07-24 17:26:48.443491] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2467c80 is same with the state(5) to be set 00:26:02.394 [2024-07-24 17:26:48.443499] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2467c80 is same with the state(5) to be set 00:26:02.394 [2024-07-24 17:26:48.443508] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2467c80 is same with the state(5) to be set 00:26:02.394 [2024-07-24 17:26:48.443517] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2467c80 is same with the state(5) to be set 00:26:02.394 [2024-07-24 17:26:48.443541] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2467c80 is same with the state(5) to be set 00:26:02.394 [2024-07-24 17:26:48.443548] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2467c80 is same with the state(5) to be set 00:26:02.394 [2024-07-24 17:26:48.443556] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2467c80 is same with the state(5) to be set 00:26:02.394 [2024-07-24 17:26:48.443563] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2467c80 is same with the state(5) to be set 00:26:02.394 [2024-07-24 17:26:48.443571] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2467c80 is same with the state(5) to be set 00:26:02.394 [2024-07-24 17:26:48.443578] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2467c80 is same with the state(5) to be set 00:26:02.394 [2024-07-24 17:26:48.443586] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2467c80 is same with the state(5) to be set 00:26:02.394 [2024-07-24 17:26:48.443608] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2467c80 is same with the state(5) to be set 00:26:02.394 [2024-07-24 17:26:48.443615] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2467c80 is same with the state(5) to be set 00:26:02.394 [2024-07-24 17:26:48.443622] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2467c80 is same with the state(5) to be set 00:26:02.394 [2024-07-24 17:26:48.443629] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2467c80 is same with the state(5) to be set 00:26:02.394 [2024-07-24 17:26:48.443636] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2467c80 is same with the state(5) to be set 00:26:02.394 [2024-07-24 17:26:48.443643] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2467c80 is same with the state(5) to be set 
00:26:02.394 [2024-07-24 17:26:48.443651 - 17:26:48.444509] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2467c80 is same with the state(5) to be set [identical message repeated for each intervening timestamp in this interval] 
00:26:02.395 [2024-07-24 17:26:48.444755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:10208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.395 [2024-07-24 17:26:48.444783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.395 [2024-07-24 17:26:48.444803] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:63048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.395 [2024-07-24 17:26:48.444813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.395 [2024-07-24 17:26:48.444824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:113336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.395 [2024-07-24 17:26:48.444833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.395 [2024-07-24 17:26:48.444843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:109336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.396 [2024-07-24 17:26:48.444852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.396 [2024-07-24 17:26:48.444862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:48416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.396 [2024-07-24 17:26:48.444872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.396 [2024-07-24 17:26:48.444882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:52464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.396 [2024-07-24 17:26:48.444891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.396 [2024-07-24 17:26:48.444901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:36024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.396 [2024-07-24 17:26:48.444909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.396 [2024-07-24 17:26:48.444919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:55576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.396 [2024-07-24 17:26:48.444927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.396 [2024-07-24 17:26:48.444937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:51336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.396 [2024-07-24 17:26:48.444945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.396 [2024-07-24 17:26:48.444956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.396 [2024-07-24 17:26:48.444964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.396 [2024-07-24 17:26:48.444974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:130520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.396 [2024-07-24 17:26:48.444982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.396 [2024-07-24 17:26:48.444993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:120 nsid:1 lba:95736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.396 [2024-07-24 17:26:48.445001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.396 [2024-07-24 17:26:48.445011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:126064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.396 [2024-07-24 17:26:48.445019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.396 [2024-07-24 17:26:48.445029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:58240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.396 [2024-07-24 17:26:48.445037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.396 [2024-07-24 17:26:48.445047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:66376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.396 [2024-07-24 17:26:48.445055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.396 [2024-07-24 17:26:48.445065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:126408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.396 [2024-07-24 17:26:48.445073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.396 [2024-07-24 17:26:48.445084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:5328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.396 [2024-07-24 17:26:48.445094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.396 [2024-07-24 17:26:48.445105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:98144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.396 [2024-07-24 17:26:48.445115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.396 [2024-07-24 17:26:48.445125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:9128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.396 [2024-07-24 17:26:48.445147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.396 [2024-07-24 17:26:48.445159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:53776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.396 [2024-07-24 17:26:48.445167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.396 [2024-07-24 17:26:48.445178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:100088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.396 [2024-07-24 17:26:48.445186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.396 [2024-07-24 17:26:48.445197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:118664 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.396 [2024-07-24 17:26:48.445205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.396 [2024-07-24 17:26:48.445216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:18200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.396 [2024-07-24 17:26:48.445224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.396 [2024-07-24 17:26:48.445235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:117200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.396 [2024-07-24 17:26:48.445243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.396 [2024-07-24 17:26:48.445254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:94680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.396 [2024-07-24 17:26:48.445263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.396 [2024-07-24 17:26:48.445273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:52160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.396 [2024-07-24 17:26:48.445282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.396 [2024-07-24 17:26:48.445292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:34624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.396 [2024-07-24 17:26:48.445300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.396 [2024-07-24 17:26:48.445310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:115080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.396 [2024-07-24 17:26:48.445318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.396 [2024-07-24 17:26:48.445329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:122320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.396 [2024-07-24 17:26:48.445337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.396 [2024-07-24 17:26:48.445348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:6464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.396 [2024-07-24 17:26:48.445357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.396 [2024-07-24 17:26:48.445367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:89920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.396 [2024-07-24 17:26:48.445375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.396 [2024-07-24 17:26:48.445385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:47936 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:02.396 [2024-07-24 17:26:48.445394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.396 [2024-07-24 17:26:48.445405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:56520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.396 [2024-07-24 17:26:48.445413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.396 [2024-07-24 17:26:48.445423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:64216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.396 [2024-07-24 17:26:48.445431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.396 [2024-07-24 17:26:48.445442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:113608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.396 [2024-07-24 17:26:48.445450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.396 [2024-07-24 17:26:48.445460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:73352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.396 [2024-07-24 17:26:48.445469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.396 [2024-07-24 17:26:48.445479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:121384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.396 [2024-07-24 17:26:48.445494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.396 [2024-07-24 17:26:48.445505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:32960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.396 [2024-07-24 17:26:48.445513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.396 [2024-07-24 17:26:48.445523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:19184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.396 [2024-07-24 17:26:48.445532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.396 [2024-07-24 17:26:48.445542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:84040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.396 [2024-07-24 17:26:48.445550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.396 [2024-07-24 17:26:48.445560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:60096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.396 [2024-07-24 17:26:48.445568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.396 [2024-07-24 17:26:48.445578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.396 [2024-07-24 
17:26:48.445587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.396 [2024-07-24 17:26:48.445598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.396 [2024-07-24 17:26:48.445607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.397 [2024-07-24 17:26:48.445617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:88864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.397 [2024-07-24 17:26:48.445625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.397 [2024-07-24 17:26:48.445635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:113928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.397 [2024-07-24 17:26:48.445643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.397 [2024-07-24 17:26:48.445653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:89648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.397 [2024-07-24 17:26:48.445661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.397 [2024-07-24 17:26:48.445671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.397 [2024-07-24 17:26:48.445680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.397 [2024-07-24 17:26:48.445690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:118528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.397 [2024-07-24 17:26:48.445698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.397 [2024-07-24 17:26:48.445709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:2272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.397 [2024-07-24 17:26:48.445717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.397 [2024-07-24 17:26:48.445727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:76208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.397 [2024-07-24 17:26:48.445736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.397 [2024-07-24 17:26:48.445746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:39960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.397 [2024-07-24 17:26:48.445754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.397 [2024-07-24 17:26:48.445764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:65920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.397 [2024-07-24 17:26:48.445772] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.397 [2024-07-24 17:26:48.445782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:130976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.397 [2024-07-24 17:26:48.445796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.397 [2024-07-24 17:26:48.445806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:90528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.397 [2024-07-24 17:26:48.445814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.397 [2024-07-24 17:26:48.445824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:37176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.397 [2024-07-24 17:26:48.445833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.397 [2024-07-24 17:26:48.445844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:75152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.397 [2024-07-24 17:26:48.445852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.397 [2024-07-24 17:26:48.445863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:64576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.397 [2024-07-24 17:26:48.445871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.397 [2024-07-24 17:26:48.445881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:35256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.397 [2024-07-24 17:26:48.445890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.397 [2024-07-24 17:26:48.445900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:75304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.397 [2024-07-24 17:26:48.445908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.397 [2024-07-24 17:26:48.445918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:46016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.397 [2024-07-24 17:26:48.445927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.397 [2024-07-24 17:26:48.445937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:70416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.397 [2024-07-24 17:26:48.445945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.397 [2024-07-24 17:26:48.445955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:37688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.397 [2024-07-24 17:26:48.445963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.397 [2024-07-24 17:26:48.445973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:57880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.397 [2024-07-24 17:26:48.445982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.397 [2024-07-24 17:26:48.445992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:104976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.397 [2024-07-24 17:26:48.446001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.397 [2024-07-24 17:26:48.446012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.397 [2024-07-24 17:26:48.446020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.397 [2024-07-24 17:26:48.446030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:44280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.397 [2024-07-24 17:26:48.446038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.397 [2024-07-24 17:26:48.446049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:32168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.397 [2024-07-24 17:26:48.446057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.397 [2024-07-24 17:26:48.446067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:27648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.397 [2024-07-24 17:26:48.446076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.397 [2024-07-24 17:26:48.446086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:114808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.397 [2024-07-24 17:26:48.446107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.397 [2024-07-24 17:26:48.446153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:127784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.397 [2024-07-24 17:26:48.446163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.397 [2024-07-24 17:26:48.446173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:78584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.397 [2024-07-24 17:26:48.446183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.397 [2024-07-24 17:26:48.446194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:103072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.397 [2024-07-24 17:26:48.446203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.397 [2024-07-24 17:26:48.446214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:126808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.397 [2024-07-24 17:26:48.446223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.397 [2024-07-24 17:26:48.446233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:130960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.397 [2024-07-24 17:26:48.446242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.397 [2024-07-24 17:26:48.446253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:77240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.397 [2024-07-24 17:26:48.446266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.397 [2024-07-24 17:26:48.446277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:72808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.397 [2024-07-24 17:26:48.446285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.397 [2024-07-24 17:26:48.446296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:26872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.397 [2024-07-24 17:26:48.446305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.397 [2024-07-24 17:26:48.446316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:121200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.397 [2024-07-24 17:26:48.446325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.397 [2024-07-24 17:26:48.446336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:121592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.397 [2024-07-24 17:26:48.446344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.397 [2024-07-24 17:26:48.446355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:109912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.397 [2024-07-24 17:26:48.446364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.397 [2024-07-24 17:26:48.446376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:61768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.397 [2024-07-24 17:26:48.446385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.397 [2024-07-24 17:26:48.446395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:58784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.397 [2024-07-24 17:26:48.446404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:26:02.397 [2024-07-24 17:26:48.446414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:129392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.397 [2024-07-24 17:26:48.446424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.398 [2024-07-24 17:26:48.446435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:85768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.398 [2024-07-24 17:26:48.446444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.398 [2024-07-24 17:26:48.446454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:91560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.398 [2024-07-24 17:26:48.446484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.398 [2024-07-24 17:26:48.446494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:38472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.398 [2024-07-24 17:26:48.446503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.398 [2024-07-24 17:26:48.446513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:108848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.398 [2024-07-24 17:26:48.446522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.398 [2024-07-24 17:26:48.446532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:24688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.398 [2024-07-24 17:26:48.446540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.398 [2024-07-24 17:26:48.446567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:85240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.398 [2024-07-24 17:26:48.446592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.398 [2024-07-24 17:26:48.446603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.398 [2024-07-24 17:26:48.446612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.398 [2024-07-24 17:26:48.446622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:39920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.398 [2024-07-24 17:26:48.446631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.398 [2024-07-24 17:26:48.446642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:75280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.398 [2024-07-24 17:26:48.446651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.398 [2024-07-24 
17:26:48.446662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:83888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.398 [2024-07-24 17:26:48.446671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.398 [2024-07-24 17:26:48.446700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:70096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.398 [2024-07-24 17:26:48.446709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.398 [2024-07-24 17:26:48.446720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:66768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.398 [2024-07-24 17:26:48.446730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.398 [2024-07-24 17:26:48.446741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:6640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.398 [2024-07-24 17:26:48.446750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.398 [2024-07-24 17:26:48.446761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:24472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.398 [2024-07-24 17:26:48.446770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.398 [2024-07-24 17:26:48.446781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.398 [2024-07-24 17:26:48.446791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.398 [2024-07-24 17:26:48.446802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.398 [2024-07-24 17:26:48.446811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.398 [2024-07-24 17:26:48.446822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:105360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.398 [2024-07-24 17:26:48.446832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.398 [2024-07-24 17:26:48.446843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:38944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.398 [2024-07-24 17:26:48.446856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.398 [2024-07-24 17:26:48.446868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:93216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.398 [2024-07-24 17:26:48.446877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.398 [2024-07-24 17:26:48.446889] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:44264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.398 [2024-07-24 17:26:48.446898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.398 [2024-07-24 17:26:48.446909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:114696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.398 [2024-07-24 17:26:48.446918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.398 [2024-07-24 17:26:48.446930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:122432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.398 [2024-07-24 17:26:48.446939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.398 [2024-07-24 17:26:48.446951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:97776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.398 [2024-07-24 17:26:48.446960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.398 [2024-07-24 17:26:48.446971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:76264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.398 [2024-07-24 17:26:48.446980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.398 [2024-07-24 17:26:48.446991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:49600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.398 [2024-07-24 17:26:48.447000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.398 [2024-07-24 17:26:48.447012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:112696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.398 [2024-07-24 17:26:48.447021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.398 [2024-07-24 17:26:48.447032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:7512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.398 [2024-07-24 17:26:48.447048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.398 [2024-07-24 17:26:48.447060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:39056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.398 [2024-07-24 17:26:48.447069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.398 [2024-07-24 17:26:48.447080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:86216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.398 [2024-07-24 17:26:48.447089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.398 [2024-07-24 17:26:48.447101] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.398 [2024-07-24 17:26:48.447110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.398 [2024-07-24 17:26:48.447162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:31520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.398 [2024-07-24 17:26:48.447185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.398 [2024-07-24 17:26:48.447197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:123800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.399 [2024-07-24 17:26:48.447206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.399 [2024-07-24 17:26:48.447218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:90792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.399 [2024-07-24 17:26:48.447227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.399 [2024-07-24 17:26:48.447239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:98560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.399 [2024-07-24 17:26:48.447253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.399 [2024-07-24 17:26:48.447265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:128400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.399 [2024-07-24 17:26:48.447274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.399 [2024-07-24 17:26:48.447286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:112392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.399 [2024-07-24 17:26:48.447295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.399 [2024-07-24 17:26:48.447306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:90720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.399 [2024-07-24 17:26:48.447316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.399 [2024-07-24 17:26:48.447327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:52240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.399 [2024-07-24 17:26:48.447336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.399 [2024-07-24 17:26:48.447347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:108656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.399 [2024-07-24 17:26:48.447357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.399 [2024-07-24 17:26:48.447368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 
lba:61584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.399 [2024-07-24 17:26:48.447377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.399 [2024-07-24 17:26:48.447388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:114800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.399 [2024-07-24 17:26:48.447398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.399 [2024-07-24 17:26:48.447409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:17304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.399 [2024-07-24 17:26:48.447418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.399 [2024-07-24 17:26:48.447430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:81144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.399 [2024-07-24 17:26:48.447444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.399 [2024-07-24 17:26:48.447455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:20312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:02.399 [2024-07-24 17:26:48.447465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.399 [2024-07-24 17:26:48.447475] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xffd8d0 is same with the state(5) to be set 00:26:02.399 [2024-07-24 17:26:48.447488] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:02.399 [2024-07-24 17:26:48.447495] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:02.399 [2024-07-24 17:26:48.447504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49904 len:8 PRP1 0x0 PRP2 0x0 00:26:02.399 [2024-07-24 17:26:48.447513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:02.399 [2024-07-24 17:26:48.447567] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xffd8d0 was disconnected and freed. reset controller. 
00:26:02.399 [2024-07-24 17:26:48.447858] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:02.399 [2024-07-24 17:26:48.447939] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf90240 (9): Bad file descriptor 00:26:02.399 [2024-07-24 17:26:48.448092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.399 [2024-07-24 17:26:48.448133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf90240 with addr=10.0.0.2, port=4420 00:26:02.399 [2024-07-24 17:26:48.448146] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf90240 is same with the state(5) to be set 00:26:02.399 [2024-07-24 17:26:48.448173] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf90240 (9): Bad file descriptor 00:26:02.399 [2024-07-24 17:26:48.448191] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:02.399 [2024-07-24 17:26:48.448201] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:02.399 [2024-07-24 17:26:48.448211] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:02.399 [2024-07-24 17:26:48.448232] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:02.399 [2024-07-24 17:26:48.448243] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:02.399 17:26:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 96219 00:26:04.302 [2024-07-24 17:26:50.448419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.302 [2024-07-24 17:26:50.448486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf90240 with addr=10.0.0.2, port=4420 00:26:04.302 [2024-07-24 17:26:50.448502] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf90240 is same with the state(5) to be set 00:26:04.302 [2024-07-24 17:26:50.448528] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf90240 (9): Bad file descriptor 00:26:04.302 [2024-07-24 17:26:50.448547] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.302 [2024-07-24 17:26:50.448556] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.302 [2024-07-24 17:26:50.448567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.302 [2024-07-24 17:26:50.448592] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:04.302 [2024-07-24 17:26:50.448602] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.834 [2024-07-24 17:26:52.448760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.834 [2024-07-24 17:26:52.448827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf90240 with addr=10.0.0.2, port=4420 00:26:06.834 [2024-07-24 17:26:52.448845] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf90240 is same with the state(5) to be set 00:26:06.834 [2024-07-24 17:26:52.448869] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf90240 (9): Bad file descriptor 00:26:06.834 [2024-07-24 17:26:52.448899] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:06.834 [2024-07-24 17:26:52.448909] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:06.834 [2024-07-24 17:26:52.448920] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:06.834 [2024-07-24 17:26:52.448944] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:06.834 [2024-07-24 17:26:52.448955] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.734 [2024-07-24 17:26:54.449024] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.734 [2024-07-24 17:26:54.449082] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:08.734 [2024-07-24 17:26:54.449111] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:08.734 [2024-07-24 17:26:54.449150] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:26:08.734 [2024-07-24 17:26:54.449178] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:09.302 00:26:09.302 Latency(us) 00:26:09.302 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:09.302 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:26:09.302 NVMe0n1 : 8.15 2887.27 11.28 15.71 0.00 44044.56 3321.48 7015926.69 00:26:09.302 =================================================================================================================== 00:26:09.302 Total : 2887.27 11.28 15.71 0.00 44044.56 3321.48 7015926.69 00:26:09.302 0 00:26:09.302 17:26:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:09.302 Attaching 5 probes... 
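Note on the pass/fail decision here: the latency table closes the bdevperf run, and the trace output that follows ("Attaching 5 probes..." is the tracer starting up) records one "reconnect delay bdev controller NVMe0" event per delayed retry, spaced roughly 2000 ms apart. The script then counts those events and fails unless more than two were seen during the ~8 s run. Condensed into a standalone sketch (trace path and threshold as printed in the log; the wrapper itself is illustrative):

# count how many delayed reconnect attempts the probes recorded
trace=/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
delays=$(grep -c 'reconnect delay bdev controller NVMe0' "$trace")
# the reconnect attempts above land about 2 s apart, so at least 3 delayed
# reconnects are expected within the run; fewer than that fails the test
if (( delays <= 2 )); then
    echo "only $delays delayed reconnects were traced" >&2
    exit 1
fi
rm -f "$trace"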
00:26:09.302 1329.278132: reset bdev controller NVMe0 00:26:09.303 1329.410598: reconnect bdev controller NVMe0 00:26:09.303 3329.720024: reconnect delay bdev controller NVMe0 00:26:09.303 3329.754757: reconnect bdev controller NVMe0 00:26:09.303 5330.062723: reconnect delay bdev controller NVMe0 00:26:09.303 5330.098710: reconnect bdev controller NVMe0 00:26:09.303 7330.414770: reconnect delay bdev controller NVMe0 00:26:09.303 7330.450690: reconnect bdev controller NVMe0 00:26:09.303 17:26:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:26:09.303 17:26:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:26:09.303 17:26:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 96160 00:26:09.303 17:26:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:09.303 17:26:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 96136 00:26:09.303 17:26:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 96136 ']' 00:26:09.303 17:26:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 96136 00:26:09.303 17:26:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:26:09.303 17:26:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:09.303 17:26:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96136 00:26:09.303 killing process with pid 96136 00:26:09.303 Received shutdown signal, test time was about 8.206932 seconds 00:26:09.303 00:26:09.303 Latency(us) 00:26:09.303 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:09.303 =================================================================================================================== 00:26:09.303 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:09.303 17:26:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:26:09.303 17:26:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:26:09.303 17:26:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 96136' 00:26:09.303 17:26:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 96136 00:26:09.303 17:26:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 96136 00:26:09.561 17:26:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:09.819 17:26:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:26:09.819 17:26:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:26:09.819 17:26:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:09.819 17:26:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@117 -- # sync 00:26:09.819 17:26:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:09.819 17:26:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@120 -- # set +e 00:26:09.819 17:26:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:09.819 17:26:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:10.077 rmmod 
nvme_tcp 00:26:10.077 rmmod nvme_fabrics 00:26:10.077 rmmod nvme_keyring 00:26:10.077 17:26:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:10.077 17:26:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set -e 00:26:10.077 17:26:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # return 0 00:26:10.077 17:26:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@489 -- # '[' -n 95547 ']' 00:26:10.077 17:26:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@490 -- # killprocess 95547 00:26:10.077 17:26:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 95547 ']' 00:26:10.077 17:26:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 95547 00:26:10.077 17:26:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:26:10.077 17:26:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:10.077 17:26:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 95547 00:26:10.077 killing process with pid 95547 00:26:10.077 17:26:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:10.077 17:26:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:10.077 17:26:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 95547' 00:26:10.077 17:26:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 95547 00:26:10.077 17:26:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 95547 00:26:10.335 17:26:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:10.335 17:26:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:10.335 17:26:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:10.335 17:26:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:10.335 17:26:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:10.335 17:26:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:10.335 17:26:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:10.335 17:26:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:10.335 17:26:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:26:10.335 ************************************ 00:26:10.335 END TEST nvmf_timeout 00:26:10.335 ************************************ 00:26:10.335 00:26:10.335 real 0m47.298s 00:26:10.336 user 2m19.373s 00:26:10.336 sys 0m4.945s 00:26:10.336 17:26:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:10.336 17:26:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:10.336 17:26:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:26:10.336 17:26:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:26:10.336 ************************************ 00:26:10.336 END TEST nvmf_host 00:26:10.336 ************************************ 00:26:10.336 00:26:10.336 real 5m36.546s 00:26:10.336 user 14m34.832s 00:26:10.336 sys 1m2.629s 
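The teardown that closed nvmf_timeout above (module unload, killing the long-running target with pid 95547, dropping the target namespace, flushing the initiator address) is the standard nvmftestfini path. Roughly, as a sketch only (the real helpers live in nvmf/common.sh and guard each step; the pid variable is illustrative):

# drop the host-side kernel modules pulled in for the test
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
# stop the target application that served the subsystems
kill "$nvmfpid" && wait "$nvmfpid" 2>/dev/null
# remove the target network namespace and flush the initiator-side address
ip netns delete nvmf_tgt_ns_spdk 2>/dev/null || true
ip -4 addr flush nvmf_init_if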
00:26:10.336 17:26:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:10.336 17:26:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:10.336 ************************************ 00:26:10.336 END TEST nvmf_tcp 00:26:10.336 ************************************ 00:26:10.336 00:26:10.336 real 15m39.797s 00:26:10.336 user 41m33.073s 00:26:10.336 sys 3m24.407s 00:26:10.336 17:26:56 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:10.336 17:26:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:10.336 17:26:56 -- spdk/autotest.sh@292 -- # [[ 0 -eq 0 ]] 00:26:10.336 17:26:56 -- spdk/autotest.sh@293 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:26:10.336 17:26:56 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:10.336 17:26:56 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:10.336 17:26:56 -- common/autotest_common.sh@10 -- # set +x 00:26:10.336 ************************************ 00:26:10.336 START TEST spdkcli_nvmf_tcp 00:26:10.336 ************************************ 00:26:10.336 17:26:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:26:10.595 * Looking for test storage... 00:26:10.595 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:26:10.595 17:26:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:26:10.595 17:26:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:26:10.595 17:26:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:26:10.595 17:26:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:10.595 17:26:56 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:26:10.595 17:26:56 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:10.595 17:26:56 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:10.595 17:26:56 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:10.595 17:26:56 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:10.595 17:26:56 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:10.595 17:26:56 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:10.595 17:26:56 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:10.595 17:26:56 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:10.595 17:26:56 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:10.595 17:26:56 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:10.595 17:26:56 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:26:10.595 17:26:56 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:26:10.595 17:26:56 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:10.595 17:26:56 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:10.595 17:26:56 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:10.595 17:26:56 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
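Sourcing nvmf/common.sh above derives the host identity from nvme-cli on the fly; the two exported variables boil down to the following (only the gen-hostnqn call is taken from the trace, the suffix extraction is an illustrative assumption):

# a fresh host NQN of the uuid form, e.g. nqn.2014-08.org.nvmexpress:uuid:6e739a91-...
NVME_HOSTNQN=$(nvme gen-hostnqn)
# the bare uuid, reused as the host identifier (--hostid) when connecting
NVME_HOSTID=${NVME_HOSTNQN##*:}
echo "$NVME_HOSTNQN / $NVME_HOSTID"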
00:26:10.595 17:26:56 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:10.595 17:26:56 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:10.595 17:26:56 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:10.595 17:26:56 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:10.595 17:26:56 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:10.595 17:26:56 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:10.595 17:26:56 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:10.595 17:26:56 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:26:10.595 17:26:56 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:10.595 17:26:56 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:26:10.595 17:26:56 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:10.595 17:26:56 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:10.595 17:26:56 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:10.595 17:26:56 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:10.595 17:26:56 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:10.595 17:26:56 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:10.595 17:26:56 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:10.595 17:26:56 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:10.595 17:26:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:26:10.595 17:26:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:26:10.595 17:26:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:26:10.595 17:26:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:26:10.595 17:26:56 spdkcli_nvmf_tcp -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:26:10.595 17:26:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:10.595 17:26:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:26:10.595 17:26:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=96436 00:26:10.595 17:26:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:26:10.595 17:26:56 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 96436 00:26:10.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:10.595 17:26:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # '[' -z 96436 ']' 00:26:10.595 17:26:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:10.595 17:26:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:10.595 17:26:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:10.595 17:26:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:10.595 17:26:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:10.595 [2024-07-24 17:26:56.686427] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:26:10.595 [2024-07-24 17:26:56.686524] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96436 ] 00:26:10.595 [2024-07-24 17:26:56.818325] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:10.854 [2024-07-24 17:26:56.903669] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:10.854 [2024-07-24 17:26:56.903696] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:11.421 17:26:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:11.421 17:26:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # return 0 00:26:11.421 17:26:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:26:11.421 17:26:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:11.421 17:26:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:11.679 17:26:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:26:11.680 17:26:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:26:11.680 17:26:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:26:11.680 17:26:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:11.680 17:26:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:11.680 17:26:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:26:11.680 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:26:11.680 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:26:11.680 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:26:11.680 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:26:11.680 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:26:11.680 '\''nvmf/transport create tcp 
max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:26:11.680 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:26:11.680 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:26:11.680 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:26:11.680 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:26:11.680 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:11.680 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:26:11.680 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:26:11.680 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:11.680 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:26:11.680 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:26:11.680 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:26:11.680 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:26:11.680 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:11.680 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:26:11.680 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:26:11.680 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:26:11.680 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:26:11.680 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:11.680 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:26:11.680 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:26:11.680 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:26:11.680 ' 00:26:14.213 [2024-07-24 17:27:00.358462] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:15.588 [2024-07-24 17:27:01.623437] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:26:18.119 [2024-07-24 17:27:03.970310] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:26:20.024 [2024-07-24 17:27:05.991732] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:26:21.401 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:26:21.401 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:26:21.401 Executing command: 
['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:26:21.401 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:26:21.401 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:26:21.401 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:26:21.401 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:26:21.401 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:26:21.401 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:26:21.401 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:26:21.401 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:26:21.401 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:26:21.401 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:26:21.401 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:26:21.401 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:26:21.401 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:26:21.401 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:26:21.401 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:26:21.401 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:26:21.401 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:26:21.401 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:26:21.401 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:26:21.401 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:26:21.401 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:26:21.401 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:26:21.401 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:26:21.401 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:26:21.401 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:26:21.659 17:27:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:26:21.659 
17:27:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:21.659 17:27:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:21.659 17:27:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:26:21.659 17:27:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:21.659 17:27:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:21.659 17:27:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:26:21.659 17:27:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf 00:26:21.917 17:27:08 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:26:22.176 17:27:08 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:26:22.176 17:27:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:26:22.176 17:27:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:22.176 17:27:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:22.176 17:27:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:26:22.176 17:27:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:22.176 17:27:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:22.176 17:27:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:26:22.176 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:26:22.176 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:26:22.176 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:26:22.176 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:26:22.176 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:26:22.176 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:26:22.176 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:26:22.176 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:26:22.176 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:26:22.176 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:26:22.176 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:26:22.176 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:26:22.176 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:26:22.176 ' 00:26:27.439 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:26:27.439 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:26:27.439 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:26:27.439 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:26:27.439 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:26:27.439 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:26:27.439 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:26:27.439 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:26:27.439 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:26:27.439 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:26:27.439 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:26:27.439 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:26:27.439 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:26:27.439 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:26:27.439 17:27:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:26:27.439 17:27:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:27.439 17:27:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:27.697 17:27:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 96436 00:26:27.697 17:27:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 96436 ']' 00:26:27.697 17:27:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 96436 00:26:27.697 17:27:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # uname 00:26:27.697 17:27:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:27.697 17:27:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96436 00:26:27.697 killing process with pid 96436 00:26:27.697 17:27:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:27.697 17:27:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:27.697 17:27:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 96436' 00:26:27.697 17:27:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@969 -- # kill 96436 00:26:27.697 17:27:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # wait 96436 00:26:27.697 Process with pid 96436 is not found 00:26:27.697 17:27:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:26:27.697 17:27:13 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:26:27.697 17:27:13 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 96436 ']' 00:26:27.697 17:27:13 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 96436 00:26:27.697 17:27:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 96436 ']' 00:26:27.697 17:27:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 96436 00:26:27.697 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (96436) - No such process 00:26:27.697 17:27:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@977 -- # echo 'Process with pid 96436 is not found' 00:26:27.697 17:27:13 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:26:27.697 17:27:13 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:26:27.697 17:27:13 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:26:27.697 
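The whole spdkcli pass above is: build the tree with spdkcli_job.py, snapshot it with "ll /nvmf" and diff the snapshot against the checked-in .match pattern, then delete everything in reverse order. The same tree can be driven one command per invocation with the plain spdkcli.py shell, mirroring the job's paths; a sketch repeating only a few of the create commands and the verification step:

SPDKCLI=/home/vagrant/spdk_repo/spdk/scripts/spdkcli.py
# a malloc bdev, the tcp transport, and one subsystem with a namespace and listener
$SPDKCLI /bdevs/malloc create 32 512 Malloc3
$SPDKCLI nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192
$SPDKCLI /nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True
$SPDKCLI /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1
$SPDKCLI /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4
# verify: dump the tree next to the stored pattern and let the match tool compare them
MATCHDIR=/home/vagrant/spdk_repo/spdk/test/spdkcli/match_files
$SPDKCLI ll /nvmf > "$MATCHDIR/spdkcli_nvmf.test"
/home/vagrant/spdk_repo/spdk/test/app/match/match "$MATCHDIR/spdkcli_nvmf.test.match"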
************************************ 00:26:27.697 END TEST spdkcli_nvmf_tcp 00:26:27.697 ************************************ 00:26:27.697 00:26:27.697 real 0m17.416s 00:26:27.697 user 0m37.591s 00:26:27.697 sys 0m0.984s 00:26:27.697 17:27:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:27.697 17:27:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:27.956 17:27:13 -- spdk/autotest.sh@294 -- # run_test nvmf_identify_passthru /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:26:27.956 17:27:13 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:27.956 17:27:13 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:27.956 17:27:13 -- common/autotest_common.sh@10 -- # set +x 00:26:27.956 ************************************ 00:26:27.956 START TEST nvmf_identify_passthru 00:26:27.956 ************************************ 00:26:27.956 17:27:13 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:26:27.956 * Looking for test storage... 00:26:27.956 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:26:27.956 17:27:14 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:27.956 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:26:27.956 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:27.956 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:27.956 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:27.956 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:27.956 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:27.956 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:27.956 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:27.956 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:27.956 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:27.956 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:27.956 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:26:27.956 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:26:27.956 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:27.956 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:27.956 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:27.956 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:27.956 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:27.956 17:27:14 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:27.956 17:27:14 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:27.956 17:27:14 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:26:27.956 17:27:14 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:27.956 17:27:14 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:27.956 17:27:14 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:27.956 17:27:14 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:26:27.956 17:27:14 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:27.956 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:26:27.957 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:27.957 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:27.957 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:27.957 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:27.957 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:27.957 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:27.957 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:27.957 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:27.957 17:27:14 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:27.957 17:27:14 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:27.957 17:27:14 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:27.957 17:27:14 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:27.957 17:27:14 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:27.957 17:27:14 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:27.957 17:27:14 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:27.957 17:27:14 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:26:27.957 17:27:14 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:27.957 17:27:14 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:26:27.957 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:27.957 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:27.957 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:27.957 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:27.957 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:27.957 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:27.957 17:27:14 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:27.957 17:27:14 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:27.957 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:26:27.957 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:26:27.957 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:26:27.957 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:26:27.957 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:26:27.957 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@432 
-- # nvmf_veth_init 00:26:27.957 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:27.957 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:27.957 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:27.957 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:26:27.957 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:27.957 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:27.957 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:27.957 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:27.957 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:27.957 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:27.957 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:27.957 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:27.957 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:26:27.957 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:26:27.957 Cannot find device "nvmf_tgt_br" 00:26:27.957 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@155 -- # true 00:26:27.957 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:26:27.957 Cannot find device "nvmf_tgt_br2" 00:26:27.957 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@156 -- # true 00:26:27.957 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:26:27.957 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:26:27.957 Cannot find device "nvmf_tgt_br" 00:26:27.957 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@158 -- # true 00:26:27.957 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:26:27.957 Cannot find device "nvmf_tgt_br2" 00:26:27.957 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@159 -- # true 00:26:27.957 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:26:28.222 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:26:28.222 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:28.222 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:28.222 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@162 -- # true 00:26:28.222 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:28.222 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:28.222 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@163 -- # true 00:26:28.222 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:26:28.222 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:28.222 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth 
peer name nvmf_tgt_br 00:26:28.222 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:28.222 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:28.222 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:28.222 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:28.222 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:28.222 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:28.222 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:26:28.222 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:26:28.222 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:26:28.222 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:26:28.222 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:28.222 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:28.222 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:28.222 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:26:28.222 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:26:28.222 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:26:28.222 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:28.222 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:28.222 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:28.222 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:28.222 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:26:28.222 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:28.222 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:26:28.222 00:26:28.222 --- 10.0.0.2 ping statistics --- 00:26:28.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:28.222 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:26:28.222 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:26:28.222 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:28.222 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:26:28.222 00:26:28.222 --- 10.0.0.3 ping statistics --- 00:26:28.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:28.222 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:26:28.222 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:28.222 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:28.222 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:26:28.222 00:26:28.222 --- 10.0.0.1 ping statistics --- 00:26:28.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:28.222 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:26:28.222 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:28.222 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@433 -- # return 0 00:26:28.222 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:28.222 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:28.222 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:28.222 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:28.222 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:28.222 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:28.222 17:27:14 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:28.222 17:27:14 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:26:28.222 17:27:14 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:28.222 17:27:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:28.222 17:27:14 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:26:28.222 17:27:14 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:26:28.222 17:27:14 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:26:28.222 17:27:14 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:26:28.222 17:27:14 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:26:28.222 17:27:14 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:26:28.222 17:27:14 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:26:28.222 17:27:14 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:26:28.222 17:27:14 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:26:28.222 17:27:14 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:26:28.482 17:27:14 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 2 == 0 )) 00:26:28.482 17:27:14 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:26:28.482 17:27:14 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:26:28.482 17:27:14 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:00:10.0 00:26:28.482 17:27:14 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:00:10.0 ']' 00:26:28.482 17:27:14 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:26:28.482 17:27:14 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:26:28.482 17:27:14 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:26:28.482 17:27:14 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=12340 
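The three pings above exercise the virtual topology nvmftestinit just assembled, and the identify call just above reads the serial number (12340) of the PCIe controller that the passthru subsystem will re-export; the model probe and the fabric-side comparison follow. Condensed, with the interface names and addresses exactly as printed in the log (the second veth pair and the link-up steps are omitted, and head -n1 stands in for the script's first-element pick):

# initiator side: nvmf_init_if (10.0.0.1/24), bridged to the target namespace via nvmf_br
# target side, inside netns nvmf_tgt_ns_spdk: nvmf_tgt_if (10.0.0.2/24), nvmf_tgt_if2 (10.0.0.3/24)
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br && ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

# pick the first local PCIe NVMe controller and read its serial number over PCIe
rootdir=/home/vagrant/spdk_repo/spdk
bdf=$("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr' | head -n1)
"$rootdir/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0 | grep 'Serial Number:'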
00:26:28.482 17:27:14 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:26:28.482 17:27:14 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:26:28.482 17:27:14 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:26:28.740 17:27:14 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU 00:26:28.740 17:27:14 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:26:28.740 17:27:14 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:28.741 17:27:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:28.741 17:27:14 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:26:28.741 17:27:14 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:28.741 17:27:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:28.741 17:27:14 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=96931 00:26:28.741 17:27:14 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:26:28.741 17:27:14 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:28.741 17:27:14 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 96931 00:26:28.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:28.741 17:27:14 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # '[' -z 96931 ']' 00:26:28.741 17:27:14 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:28.741 17:27:14 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:28.741 17:27:14 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:28.741 17:27:14 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:28.741 17:27:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:28.741 [2024-07-24 17:27:14.947179] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:26:28.741 [2024-07-24 17:27:14.947496] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:28.999 [2024-07-24 17:27:15.088004] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:28.999 [2024-07-24 17:27:15.190283] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:28.999 [2024-07-24 17:27:15.190530] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:28.999 [2024-07-24 17:27:15.190685] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:28.999 [2024-07-24 17:27:15.190735] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
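At this point the target is parked: it was launched inside the target namespace with --wait-for-rpc, so nothing is initialized until the RPCs that follow arrive on /var/tmp/spdk.sock. Condensed, the sequence issued next (commands mirrored from the trace; only the wrapper variable is illustrative) plus the final check that the fabric-side controller now reports the physical drive's identity:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# enable the passthru identify handler before the framework starts
$rpc nvmf_set_config --passthru-identify-ctrlr
$rpc framework_start_init
# tcp transport, local PCIe controller, and a single-namespace subsystem listening on 10.0.0.2:4420
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# identify over the fabric should now report the physical drive's values (12340 / QEMU here)
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
    | grep -E 'Serial Number:|Model Number:'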
00:26:28.999 [2024-07-24 17:27:15.190828] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:28.999 [2024-07-24 17:27:15.190953] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:28.999 [2024-07-24 17:27:15.191122] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:28.999 [2024-07-24 17:27:15.191373] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:28.999 [2024-07-24 17:27:15.191379] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:29.965 17:27:15 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:29.965 17:27:15 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # return 0 00:26:29.965 17:27:15 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:26:29.965 17:27:15 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.965 17:27:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:29.965 17:27:15 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.965 17:27:15 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:26:29.965 17:27:15 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.965 17:27:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:29.965 [2024-07-24 17:27:15.951934] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:26:29.965 17:27:15 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.965 17:27:15 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:29.965 17:27:15 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.965 17:27:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:29.965 [2024-07-24 17:27:15.966401] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:29.965 17:27:15 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.965 17:27:15 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:26:29.965 17:27:15 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:29.965 17:27:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:29.965 17:27:16 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:26:29.965 17:27:16 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.965 17:27:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:29.965 Nvme0n1 00:26:29.965 17:27:16 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.965 17:27:16 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:26:29.965 17:27:16 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.965 17:27:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:29.965 17:27:16 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.965 17:27:16 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:26:29.965 17:27:16 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.965 17:27:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:29.965 17:27:16 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.965 17:27:16 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:29.965 17:27:16 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.965 17:27:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:29.965 [2024-07-24 17:27:16.113543] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:29.965 17:27:16 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.965 17:27:16 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:26:29.965 17:27:16 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.965 17:27:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:29.965 [ 00:26:29.965 { 00:26:29.965 "allow_any_host": true, 00:26:29.965 "hosts": [], 00:26:29.965 "listen_addresses": [], 00:26:29.965 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:29.965 "subtype": "Discovery" 00:26:29.965 }, 00:26:29.965 { 00:26:29.965 "allow_any_host": true, 00:26:29.965 "hosts": [], 00:26:29.965 "listen_addresses": [ 00:26:29.965 { 00:26:29.965 "adrfam": "IPv4", 00:26:29.965 "traddr": "10.0.0.2", 00:26:29.965 "trsvcid": "4420", 00:26:29.965 "trtype": "TCP" 00:26:29.965 } 00:26:29.965 ], 00:26:29.965 "max_cntlid": 65519, 00:26:29.965 "max_namespaces": 1, 00:26:29.965 "min_cntlid": 1, 00:26:29.965 "model_number": "SPDK bdev Controller", 00:26:29.965 "namespaces": [ 00:26:29.965 { 00:26:29.965 "bdev_name": "Nvme0n1", 00:26:29.965 "name": "Nvme0n1", 00:26:29.965 "nguid": "9E73B7D096514BE180389603115113E3", 00:26:29.965 "nsid": 1, 00:26:29.965 "uuid": "9e73b7d0-9651-4be1-8038-9603115113e3" 00:26:29.965 } 00:26:29.965 ], 00:26:29.965 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:29.965 "serial_number": "SPDK00000000000001", 00:26:29.965 "subtype": "NVMe" 00:26:29.965 } 00:26:29.965 ] 00:26:29.966 17:27:16 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.966 17:27:16 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:29.966 17:27:16 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:26:29.966 17:27:16 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:26:30.224 17:27:16 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340 00:26:30.224 17:27:16 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:30.224 17:27:16 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:26:30.224 17:27:16 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:26:30.483 17:27:16 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=QEMU 00:26:30.483 17:27:16 
nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']' 00:26:30.483 17:27:16 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']' 00:26:30.483 17:27:16 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:30.483 17:27:16 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.483 17:27:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:30.483 17:27:16 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.483 17:27:16 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:26:30.483 17:27:16 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:26:30.483 17:27:16 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:30.483 17:27:16 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:26:30.483 17:27:16 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:30.483 17:27:16 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:26:30.483 17:27:16 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:30.483 17:27:16 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:30.483 rmmod nvme_tcp 00:26:30.483 rmmod nvme_fabrics 00:26:30.483 rmmod nvme_keyring 00:26:30.483 17:27:16 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:30.483 17:27:16 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:26:30.483 17:27:16 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:26:30.483 17:27:16 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 96931 ']' 00:26:30.483 17:27:16 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 96931 00:26:30.483 17:27:16 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # '[' -z 96931 ']' 00:26:30.483 17:27:16 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # kill -0 96931 00:26:30.483 17:27:16 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # uname 00:26:30.483 17:27:16 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:30.483 17:27:16 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96931 00:26:30.483 killing process with pid 96931 00:26:30.483 17:27:16 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:30.483 17:27:16 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:30.483 17:27:16 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # echo 'killing process with pid 96931' 00:26:30.483 17:27:16 nvmf_identify_passthru -- common/autotest_common.sh@969 -- # kill 96931 00:26:30.483 17:27:16 nvmf_identify_passthru -- common/autotest_common.sh@974 -- # wait 96931 00:26:30.742 17:27:16 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:30.742 17:27:16 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:30.742 17:27:16 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:30.742 17:27:16 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:30.742 17:27:16 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:30.742 17:27:16 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:30.742 17:27:16 
nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:30.742 17:27:16 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:30.742 17:27:16 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:26:30.742 ************************************ 00:26:30.742 END TEST nvmf_identify_passthru 00:26:30.742 ************************************ 00:26:30.742 00:26:30.742 real 0m2.980s 00:26:30.742 user 0m7.292s 00:26:30.742 sys 0m0.799s 00:26:30.742 17:27:16 nvmf_identify_passthru -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:30.742 17:27:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:31.001 17:27:16 -- spdk/autotest.sh@296 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:26:31.001 17:27:16 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:31.001 17:27:16 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:31.001 17:27:16 -- common/autotest_common.sh@10 -- # set +x 00:26:31.001 ************************************ 00:26:31.001 START TEST nvmf_dif 00:26:31.001 ************************************ 00:26:31.001 17:27:17 nvmf_dif -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:26:31.001 * Looking for test storage... 00:26:31.001 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:26:31.001 17:27:17 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:31.001 17:27:17 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:26:31.001 17:27:17 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:31.001 17:27:17 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:31.001 17:27:17 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:31.001 17:27:17 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:31.001 17:27:17 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:31.001 17:27:17 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:31.001 17:27:17 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:31.001 17:27:17 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:31.001 17:27:17 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:31.001 17:27:17 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:31.001 17:27:17 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:26:31.001 17:27:17 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:26:31.001 17:27:17 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:31.001 17:27:17 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:31.001 17:27:17 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:31.001 17:27:17 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:31.001 17:27:17 nvmf_dif -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:31.001 17:27:17 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:31.001 17:27:17 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:31.001 17:27:17 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:31.001 17:27:17 nvmf_dif -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:31.001 17:27:17 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:31.001 17:27:17 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:31.001 17:27:17 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:26:31.001 17:27:17 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:31.001 17:27:17 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:26:31.001 17:27:17 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:31.001 17:27:17 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:31.001 17:27:17 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:31.001 17:27:17 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:31.001 17:27:17 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:31.001 17:27:17 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:31.001 17:27:17 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:31.001 17:27:17 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:31.001 17:27:17 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:26:31.001 17:27:17 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:26:31.001 17:27:17 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:26:31.001 17:27:17 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:26:31.001 17:27:17 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:26:31.001 17:27:17 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:31.001 17:27:17 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:31.001 17:27:17 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:31.001 17:27:17 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:31.001 17:27:17 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:31.001 17:27:17 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:31.001 17:27:17 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:31.001 17:27:17 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:26:31.001 17:27:17 nvmf_dif -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:26:31.001 17:27:17 nvmf_dif -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:26:31.001 17:27:17 nvmf_dif -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:26:31.001 17:27:17 nvmf_dif -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:26:31.001 17:27:17 nvmf_dif -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:26:31.001 17:27:17 nvmf_dif -- nvmf/common.sh@432 -- # nvmf_veth_init 00:26:31.001 17:27:17 nvmf_dif -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:31.001 17:27:17 nvmf_dif -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:31.001 17:27:17 nvmf_dif -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:31.001 17:27:17 nvmf_dif -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:26:31.001 17:27:17 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:31.001 17:27:17 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:31.001 17:27:17 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:31.001 17:27:17 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:31.001 17:27:17 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:31.001 17:27:17 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:31.001 17:27:17 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:31.001 17:27:17 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:31.001 17:27:17 nvmf_dif -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:26:31.001 17:27:17 nvmf_dif -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:26:31.001 Cannot find device "nvmf_tgt_br" 00:26:31.001 17:27:17 nvmf_dif -- nvmf/common.sh@155 -- # true 00:26:31.001 17:27:17 nvmf_dif -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:26:31.001 Cannot find device "nvmf_tgt_br2" 00:26:31.001 17:27:17 nvmf_dif -- nvmf/common.sh@156 -- # true 00:26:31.001 17:27:17 nvmf_dif -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:26:31.001 17:27:17 nvmf_dif -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:26:31.001 Cannot find device "nvmf_tgt_br" 00:26:31.001 17:27:17 nvmf_dif -- nvmf/common.sh@158 -- # true 00:26:31.001 17:27:17 nvmf_dif -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:26:31.001 Cannot find device "nvmf_tgt_br2" 00:26:31.001 17:27:17 nvmf_dif -- nvmf/common.sh@159 -- # true 00:26:31.001 17:27:17 nvmf_dif -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:26:31.001 17:27:17 nvmf_dif -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:26:31.001 17:27:17 nvmf_dif -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:31.001 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:31.001 17:27:17 nvmf_dif -- nvmf/common.sh@162 -- # true 00:26:31.002 17:27:17 nvmf_dif -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:31.002 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:31.002 17:27:17 nvmf_dif -- nvmf/common.sh@163 -- # true 00:26:31.002 17:27:17 nvmf_dif -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:26:31.002 17:27:17 nvmf_dif -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:31.002 17:27:17 nvmf_dif -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type 
veth peer name nvmf_tgt_br 00:26:31.002 17:27:17 nvmf_dif -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:31.260 17:27:17 nvmf_dif -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:31.260 17:27:17 nvmf_dif -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:31.260 17:27:17 nvmf_dif -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:31.260 17:27:17 nvmf_dif -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:31.260 17:27:17 nvmf_dif -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:31.260 17:27:17 nvmf_dif -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:26:31.260 17:27:17 nvmf_dif -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:26:31.260 17:27:17 nvmf_dif -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:26:31.260 17:27:17 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:26:31.260 17:27:17 nvmf_dif -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:31.260 17:27:17 nvmf_dif -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:31.260 17:27:17 nvmf_dif -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:31.260 17:27:17 nvmf_dif -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:26:31.260 17:27:17 nvmf_dif -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:26:31.260 17:27:17 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:26:31.260 17:27:17 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:31.260 17:27:17 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:31.260 17:27:17 nvmf_dif -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:31.260 17:27:17 nvmf_dif -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:31.260 17:27:17 nvmf_dif -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:26:31.260 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:31.260 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:26:31.260 00:26:31.260 --- 10.0.0.2 ping statistics --- 00:26:31.260 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:31.260 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:26:31.260 17:27:17 nvmf_dif -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:26:31.260 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:31.260 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:26:31.260 00:26:31.260 --- 10.0.0.3 ping statistics --- 00:26:31.260 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:31.260 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:26:31.260 17:27:17 nvmf_dif -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:31.260 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:31.260 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:26:31.260 00:26:31.260 --- 10.0.0.1 ping statistics --- 00:26:31.260 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:31.260 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:26:31.260 17:27:17 nvmf_dif -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:31.260 17:27:17 nvmf_dif -- nvmf/common.sh@433 -- # return 0 00:26:31.260 17:27:17 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:26:31.260 17:27:17 nvmf_dif -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:26:31.519 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:31.519 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:26:31.519 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:26:31.777 17:27:17 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:31.777 17:27:17 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:31.777 17:27:17 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:31.777 17:27:17 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:31.777 17:27:17 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:31.777 17:27:17 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:31.777 17:27:17 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:26:31.777 17:27:17 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:26:31.777 17:27:17 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:31.777 17:27:17 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:31.777 17:27:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:31.777 17:27:17 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=97281 00:26:31.777 17:27:17 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 97281 00:26:31.778 17:27:17 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:26:31.778 17:27:17 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 97281 ']' 00:26:31.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:31.778 17:27:17 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:31.778 17:27:17 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:31.778 17:27:17 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:31.778 17:27:17 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:31.778 17:27:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:31.778 [2024-07-24 17:27:17.852514] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:26:31.778 [2024-07-24 17:27:17.852614] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:31.778 [2024-07-24 17:27:17.990819] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:32.037 [2024-07-24 17:27:18.108200] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:26:32.037 [2024-07-24 17:27:18.108259] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:32.037 [2024-07-24 17:27:18.108277] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:32.037 [2024-07-24 17:27:18.108287] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:32.037 [2024-07-24 17:27:18.108296] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:32.037 [2024-07-24 17:27:18.108326] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:32.973 17:27:18 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:32.973 17:27:18 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:26:32.973 17:27:18 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:32.973 17:27:18 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:32.973 17:27:18 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:32.973 17:27:18 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:32.973 17:27:18 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:26:32.973 17:27:18 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:26:32.973 17:27:18 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.973 17:27:18 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:32.973 [2024-07-24 17:27:18.930689] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:32.973 17:27:18 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.973 17:27:18 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:26:32.973 17:27:18 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:32.973 17:27:18 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:32.973 17:27:18 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:32.973 ************************************ 00:26:32.973 START TEST fio_dif_1_default 00:26:32.973 ************************************ 00:26:32.973 17:27:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:26:32.973 17:27:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:26:32.973 17:27:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:26:32.973 17:27:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:26:32.973 17:27:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:26:32.973 17:27:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:26:32.973 17:27:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:26:32.973 17:27:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.973 17:27:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:32.973 bdev_null0 00:26:32.973 17:27:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.973 17:27:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:32.973 17:27:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.973 17:27:18 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:32.973 17:27:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.973 17:27:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:32.973 17:27:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.973 17:27:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:32.973 17:27:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.973 17:27:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:32.973 17:27:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.973 17:27:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:32.973 [2024-07-24 17:27:18.982807] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:32.973 17:27:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.973 17:27:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:26:32.973 17:27:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:26:32.973 17:27:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:32.973 17:27:18 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:26:32.973 17:27:18 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:26:32.973 17:27:18 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:32.973 17:27:18 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:32.973 { 00:26:32.973 "params": { 00:26:32.973 "name": "Nvme$subsystem", 00:26:32.973 "trtype": "$TEST_TRANSPORT", 00:26:32.973 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:32.973 "adrfam": "ipv4", 00:26:32.973 "trsvcid": "$NVMF_PORT", 00:26:32.973 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:32.973 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:32.973 "hdgst": ${hdgst:-false}, 00:26:32.973 "ddgst": ${ddgst:-false} 00:26:32.973 }, 00:26:32.973 "method": "bdev_nvme_attach_controller" 00:26:32.973 } 00:26:32.973 EOF 00:26:32.973 )") 00:26:32.973 17:27:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:26:32.973 17:27:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:32.973 17:27:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:32.973 17:27:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:26:32.973 17:27:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:32.973 17:27:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:26:32.973 17:27:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:32.973 17:27:18 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:26:32.973 17:27:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:32.973 17:27:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:32.973 17:27:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:26:32.973 17:27:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:32.973 17:27:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:32.973 17:27:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:26:32.973 17:27:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:26:32.973 17:27:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:32.973 17:27:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:26:32.973 17:27:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:32.973 17:27:18 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:26:32.973 17:27:19 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:26:32.973 17:27:19 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:32.973 "params": { 00:26:32.973 "name": "Nvme0", 00:26:32.973 "trtype": "tcp", 00:26:32.973 "traddr": "10.0.0.2", 00:26:32.973 "adrfam": "ipv4", 00:26:32.973 "trsvcid": "4420", 00:26:32.973 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:32.973 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:32.973 "hdgst": false, 00:26:32.973 "ddgst": false 00:26:32.973 }, 00:26:32.973 "method": "bdev_nvme_attach_controller" 00:26:32.973 }' 00:26:32.973 17:27:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:32.973 17:27:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:32.973 17:27:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:32.973 17:27:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:32.973 17:27:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:32.973 17:27:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:32.973 17:27:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:32.973 17:27:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:32.973 17:27:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:32.973 17:27:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:32.973 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:26:32.973 fio-3.35 00:26:32.973 Starting 1 thread 00:26:45.180 00:26:45.180 filename0: (groupid=0, jobs=1): err= 0: pid=97360: Wed Jul 24 17:27:29 2024 00:26:45.180 read: IOPS=574, BW=2300KiB/s (2355kB/s)(22.5MiB/10033msec) 00:26:45.180 slat (nsec): min=5885, max=58504, avg=8290.30, stdev=4208.09 00:26:45.180 clat (usec): min=351, max=42009, avg=6932.32, stdev=14855.52 00:26:45.180 lat (usec): min=357, max=42019, avg=6940.61, stdev=14855.53 00:26:45.180 clat percentiles (usec): 00:26:45.180 | 1.00th=[ 363], 5.00th=[ 379], 10.00th=[ 388], 20.00th=[ 404], 00:26:45.180 | 30.00th=[ 420], 40.00th=[ 433], 50.00th=[ 449], 60.00th=[ 
465], 00:26:45.180 | 70.00th=[ 490], 80.00th=[ 537], 90.00th=[40633], 95.00th=[41157], 00:26:45.180 | 99.00th=[41157], 99.50th=[41681], 99.90th=[41681], 99.95th=[42206], 00:26:45.180 | 99.99th=[42206] 00:26:45.180 bw ( KiB/s): min= 1440, max= 3072, per=100.00%, avg=2304.80, stdev=421.61, samples=20 00:26:45.180 iops : min= 360, max= 768, avg=576.40, stdev=105.38, samples=20 00:26:45.180 lat (usec) : 500=72.88%, 750=11.03% 00:26:45.180 lat (msec) : 4=0.07%, 50=16.02% 00:26:45.180 cpu : usr=91.71%, sys=7.69%, ctx=26, majf=0, minf=9 00:26:45.180 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:45.180 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:45.180 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:45.180 issued rwts: total=5768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:45.180 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:45.180 00:26:45.180 Run status group 0 (all jobs): 00:26:45.180 READ: bw=2300KiB/s (2355kB/s), 2300KiB/s-2300KiB/s (2355kB/s-2355kB/s), io=22.5MiB (23.6MB), run=10033-10033msec 00:26:45.180 17:27:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:26:45.180 17:27:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:26:45.180 17:27:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:26:45.180 17:27:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:45.180 17:27:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:26:45.180 17:27:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:45.180 17:27:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.180 17:27:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:45.180 17:27:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.180 17:27:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:45.180 17:27:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.180 17:27:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:45.180 17:27:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.180 ************************************ 00:26:45.180 END TEST fio_dif_1_default 00:26:45.180 ************************************ 00:26:45.180 00:26:45.180 real 0m11.052s 00:26:45.180 user 0m9.866s 00:26:45.180 sys 0m1.040s 00:26:45.180 17:27:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:45.180 17:27:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:45.180 17:27:30 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:26:45.180 17:27:30 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:45.181 17:27:30 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:45.181 17:27:30 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:45.181 ************************************ 00:26:45.181 START TEST fio_dif_1_multi_subsystems 00:26:45.181 ************************************ 00:26:45.181 17:27:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:26:45.181 17:27:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 
-- # local files=1 00:26:45.181 17:27:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:26:45.181 17:27:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:26:45.181 17:27:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:26:45.181 17:27:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:26:45.181 17:27:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:26:45.181 17:27:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:26:45.181 17:27:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.181 17:27:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:45.181 bdev_null0 00:26:45.181 17:27:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.181 17:27:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:45.181 17:27:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.181 17:27:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:45.181 17:27:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.181 17:27:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:45.181 17:27:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.181 17:27:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:45.181 17:27:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.181 17:27:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:45.181 17:27:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.181 17:27:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:45.181 [2024-07-24 17:27:30.079066] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:45.181 17:27:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.181 17:27:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:26:45.181 17:27:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:26:45.181 17:27:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:26:45.181 17:27:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:26:45.181 17:27:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.181 17:27:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:45.181 bdev_null1 00:26:45.181 17:27:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.181 17:27:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:45.181 17:27:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.181 17:27:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:45.181 17:27:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.181 17:27:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:45.181 17:27:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.181 17:27:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:45.181 17:27:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.181 17:27:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:45.181 17:27:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.181 17:27:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:45.181 17:27:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.181 17:27:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:26:45.181 17:27:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:26:45.181 17:27:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:26:45.181 17:27:30 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:26:45.181 17:27:30 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:26:45.181 17:27:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:45.181 17:27:30 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:45.181 17:27:30 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:45.181 { 00:26:45.181 "params": { 00:26:45.181 "name": "Nvme$subsystem", 00:26:45.181 "trtype": "$TEST_TRANSPORT", 00:26:45.181 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:45.181 "adrfam": "ipv4", 00:26:45.181 "trsvcid": "$NVMF_PORT", 00:26:45.181 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:45.181 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:45.181 "hdgst": ${hdgst:-false}, 00:26:45.181 "ddgst": ${ddgst:-false} 00:26:45.181 }, 00:26:45.181 "method": "bdev_nvme_attach_controller" 00:26:45.181 } 00:26:45.181 EOF 00:26:45.181 )") 00:26:45.181 17:27:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:45.181 17:27:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:26:45.181 17:27:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:26:45.181 17:27:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:45.181 17:27:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:26:45.181 17:27:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:26:45.181 17:27:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:45.181 17:27:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:45.181 17:27:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:26:45.181 17:27:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:45.181 17:27:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:45.181 17:27:30 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:26:45.181 17:27:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:26:45.181 17:27:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:26:45.181 17:27:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:26:45.181 17:27:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:45.181 17:27:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:26:45.181 17:27:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:45.181 17:27:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:26:45.181 17:27:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:26:45.181 17:27:30 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:45.181 17:27:30 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:45.181 { 00:26:45.181 "params": { 00:26:45.181 "name": "Nvme$subsystem", 00:26:45.181 "trtype": "$TEST_TRANSPORT", 00:26:45.181 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:45.181 "adrfam": "ipv4", 00:26:45.181 "trsvcid": "$NVMF_PORT", 00:26:45.181 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:45.181 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:45.181 "hdgst": ${hdgst:-false}, 00:26:45.181 "ddgst": ${ddgst:-false} 00:26:45.181 }, 00:26:45.181 "method": "bdev_nvme_attach_controller" 00:26:45.181 } 00:26:45.181 EOF 00:26:45.181 )") 00:26:45.181 17:27:30 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:26:45.181 17:27:30 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:26:45.181 17:27:30 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:26:45.181 17:27:30 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:45.181 "params": { 00:26:45.181 "name": "Nvme0", 00:26:45.181 "trtype": "tcp", 00:26:45.181 "traddr": "10.0.0.2", 00:26:45.181 "adrfam": "ipv4", 00:26:45.181 "trsvcid": "4420", 00:26:45.181 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:45.181 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:45.181 "hdgst": false, 00:26:45.181 "ddgst": false 00:26:45.181 }, 00:26:45.181 "method": "bdev_nvme_attach_controller" 00:26:45.181 },{ 00:26:45.181 "params": { 00:26:45.181 "name": "Nvme1", 00:26:45.181 "trtype": "tcp", 00:26:45.181 "traddr": "10.0.0.2", 00:26:45.181 "adrfam": "ipv4", 00:26:45.181 "trsvcid": "4420", 00:26:45.181 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:45.181 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:45.181 "hdgst": false, 00:26:45.181 "ddgst": false 00:26:45.181 }, 00:26:45.181 "method": "bdev_nvme_attach_controller" 00:26:45.181 }' 00:26:45.181 17:27:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:45.181 17:27:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:45.181 17:27:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:45.181 17:27:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:45.182 17:27:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:45.182 17:27:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:45.182 17:27:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:45.182 17:27:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:45.182 17:27:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:45.182 17:27:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:45.182 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:26:45.182 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:26:45.182 fio-3.35 00:26:45.182 Starting 2 threads 00:26:55.199 00:26:55.199 filename0: (groupid=0, jobs=1): err= 0: pid=97519: Wed Jul 24 17:27:41 2024 00:26:55.199 read: IOPS=177, BW=709KiB/s (726kB/s)(7104KiB/10024msec) 00:26:55.199 slat (nsec): min=6262, max=47834, avg=9572.09, stdev=4663.98 00:26:55.199 clat (usec): min=373, max=41869, avg=22546.78, stdev=20149.63 00:26:55.199 lat (usec): min=379, max=41885, avg=22556.35, stdev=20149.51 00:26:55.199 clat percentiles (usec): 00:26:55.199 | 1.00th=[ 396], 5.00th=[ 416], 10.00th=[ 429], 20.00th=[ 461], 00:26:55.199 | 30.00th=[ 498], 40.00th=[ 537], 50.00th=[40633], 60.00th=[40633], 00:26:55.199 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:26:55.199 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:26:55.199 | 99.99th=[41681] 00:26:55.199 bw ( KiB/s): min= 544, max= 928, per=49.89%, avg=708.70, stdev=115.60, samples=20 00:26:55.199 iops : 
min= 136, max= 232, avg=177.15, stdev=28.86, samples=20 00:26:55.199 lat (usec) : 500=30.24%, 750=12.95%, 1000=2.08% 00:26:55.199 lat (msec) : 4=0.23%, 50=54.50% 00:26:55.199 cpu : usr=95.17%, sys=4.34%, ctx=8, majf=0, minf=0 00:26:55.199 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:55.199 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.199 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.199 issued rwts: total=1776,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.199 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:55.199 filename1: (groupid=0, jobs=1): err= 0: pid=97520: Wed Jul 24 17:27:41 2024 00:26:55.199 read: IOPS=177, BW=710KiB/s (727kB/s)(7120KiB/10023msec) 00:26:55.199 slat (nsec): min=6284, max=67207, avg=8923.31, stdev=4186.60 00:26:55.199 clat (usec): min=376, max=42479, avg=22496.29, stdev=20159.71 00:26:55.199 lat (usec): min=383, max=42489, avg=22505.21, stdev=20159.72 00:26:55.199 clat percentiles (usec): 00:26:55.199 | 1.00th=[ 396], 5.00th=[ 416], 10.00th=[ 429], 20.00th=[ 449], 00:26:55.199 | 30.00th=[ 482], 40.00th=[ 545], 50.00th=[40633], 60.00th=[41157], 00:26:55.199 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:26:55.199 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42730], 99.95th=[42730], 00:26:55.199 | 99.99th=[42730] 00:26:55.199 bw ( KiB/s): min= 480, max= 928, per=50.04%, avg=710.35, stdev=99.93, samples=20 00:26:55.199 iops : min= 120, max= 232, avg=177.55, stdev=24.98, samples=20 00:26:55.199 lat (usec) : 500=35.28%, 750=7.87%, 1000=2.25% 00:26:55.199 lat (msec) : 4=0.22%, 50=54.38% 00:26:55.199 cpu : usr=94.74%, sys=4.85%, ctx=20, majf=0, minf=9 00:26:55.199 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:55.199 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.199 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.199 issued rwts: total=1780,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.199 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:55.199 00:26:55.199 Run status group 0 (all jobs): 00:26:55.199 READ: bw=1419KiB/s (1453kB/s), 709KiB/s-710KiB/s (726kB/s-727kB/s), io=13.9MiB (14.6MB), run=10023-10024msec 00:26:55.199 17:27:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:26:55.199 17:27:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:26:55.199 17:27:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:26:55.199 17:27:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:55.199 17:27:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:26:55.199 17:27:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:55.199 17:27:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.199 17:27:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:55.199 17:27:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.199 17:27:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:55.199 17:27:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.199 17:27:41 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:55.199 17:27:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.200 17:27:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:26:55.200 17:27:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:55.200 17:27:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:26:55.200 17:27:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:55.200 17:27:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.200 17:27:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:55.200 17:27:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.200 17:27:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:55.200 17:27:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.200 17:27:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:55.200 17:27:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.200 00:26:55.200 real 0m11.211s 00:26:55.200 user 0m19.870s 00:26:55.200 sys 0m1.188s 00:26:55.200 17:27:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:55.200 ************************************ 00:26:55.200 17:27:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:55.200 END TEST fio_dif_1_multi_subsystems 00:26:55.200 ************************************ 00:26:55.200 17:27:41 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:26:55.200 17:27:41 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:55.200 17:27:41 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:55.200 17:27:41 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:55.200 ************************************ 00:26:55.200 START TEST fio_dif_rand_params 00:26:55.200 ************************************ 00:26:55.200 17:27:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:26:55.200 17:27:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:26:55.200 17:27:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:26:55.200 17:27:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:26:55.200 17:27:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:26:55.200 17:27:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:26:55.200 17:27:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:26:55.200 17:27:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:26:55.200 17:27:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:26:55.200 17:27:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:26:55.200 17:27:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:55.200 17:27:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:26:55.200 17:27:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local 
sub_id=0 00:26:55.200 17:27:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:26:55.200 17:27:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.200 17:27:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:55.200 bdev_null0 00:26:55.200 17:27:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.200 17:27:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:55.200 17:27:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.200 17:27:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:55.200 17:27:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.200 17:27:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:55.200 17:27:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.200 17:27:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:55.200 17:27:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.200 17:27:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:55.200 17:27:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.200 17:27:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:55.200 [2024-07-24 17:27:41.338665] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:55.200 17:27:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.200 17:27:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:26:55.200 17:27:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:26:55.200 17:27:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:55.200 17:27:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:26:55.200 17:27:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:55.200 17:27:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:26:55.200 17:27:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:26:55.200 17:27:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:55.200 17:27:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:26:55.200 17:27:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:55.200 17:27:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:26:55.200 17:27:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:55.200 17:27:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:55.200 { 00:26:55.200 "params": { 00:26:55.200 "name": "Nvme$subsystem", 00:26:55.200 "trtype": "$TEST_TRANSPORT", 00:26:55.200 
"traddr": "$NVMF_FIRST_TARGET_IP", 00:26:55.200 "adrfam": "ipv4", 00:26:55.200 "trsvcid": "$NVMF_PORT", 00:26:55.200 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:55.200 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:55.200 "hdgst": ${hdgst:-false}, 00:26:55.200 "ddgst": ${ddgst:-false} 00:26:55.200 }, 00:26:55.200 "method": "bdev_nvme_attach_controller" 00:26:55.200 } 00:26:55.200 EOF 00:26:55.200 )") 00:26:55.200 17:27:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:55.200 17:27:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:55.200 17:27:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:55.200 17:27:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:26:55.200 17:27:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:55.200 17:27:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:55.200 17:27:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:55.200 17:27:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:26:55.200 17:27:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:55.200 17:27:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:26:55.200 17:27:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:55.200 17:27:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:55.200 17:27:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:26:55.200 17:27:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:26:55.200 17:27:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:55.200 "params": { 00:26:55.200 "name": "Nvme0", 00:26:55.200 "trtype": "tcp", 00:26:55.200 "traddr": "10.0.0.2", 00:26:55.200 "adrfam": "ipv4", 00:26:55.200 "trsvcid": "4420", 00:26:55.200 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:55.200 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:55.200 "hdgst": false, 00:26:55.200 "ddgst": false 00:26:55.200 }, 00:26:55.200 "method": "bdev_nvme_attach_controller" 00:26:55.200 }' 00:26:55.200 17:27:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:55.200 17:27:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:55.200 17:27:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:55.200 17:27:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:55.200 17:27:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:55.200 17:27:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:55.200 17:27:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:55.200 17:27:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:55.200 17:27:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:55.200 17:27:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:55.459 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:26:55.459 ... 
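[note] Before this fio run, the harness created the target side with four rpc_cmd calls traced earlier (bdev_null_create with --dif-type 3, nvmf_create_subsystem, nvmf_subsystem_add_ns, nvmf_subsystem_add_listener), and the JSON printed above tells the spdk_bdev ioengine to attach to that subsystem over NVMe/TCP. A rough standalone equivalent of the same setup using SPDK's JSON-RPC client is sketched below; rpc_cmd in the harness forwards to the same RPC methods, and a running nvmf target plus an SPDK repo checkout are assumed.

# Sketch only: same RPC sequence as create_subsystem 0 in the NULL_DIF=3 case above.
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3   # 64 MB null bdev, 512 B blocks, DIF type 3
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420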
00:26:55.459 fio-3.35 00:26:55.459 Starting 3 threads 00:27:02.023 00:27:02.023 filename0: (groupid=0, jobs=1): err= 0: pid=97676: Wed Jul 24 17:27:47 2024 00:27:02.023 read: IOPS=271, BW=33.9MiB/s (35.6MB/s)(170MiB/5004msec) 00:27:02.023 slat (nsec): min=6639, max=57972, avg=13327.98, stdev=6144.34 00:27:02.023 clat (usec): min=4171, max=53197, avg=11026.32, stdev=5534.48 00:27:02.023 lat (usec): min=4182, max=53212, avg=11039.65, stdev=5534.64 00:27:02.023 clat percentiles (usec): 00:27:02.023 | 1.00th=[ 6128], 5.00th=[ 7373], 10.00th=[ 8291], 20.00th=[ 9503], 00:27:02.023 | 30.00th=[10028], 40.00th=[10290], 50.00th=[10552], 60.00th=[10945], 00:27:02.023 | 70.00th=[11207], 80.00th=[11469], 90.00th=[11863], 95.00th=[12256], 00:27:02.023 | 99.00th=[51119], 99.50th=[51643], 99.90th=[52691], 99.95th=[53216], 00:27:02.023 | 99.99th=[53216] 00:27:02.023 bw ( KiB/s): min=25344, max=41216, per=36.78%, avg=34474.67, stdev=4389.49, samples=9 00:27:02.023 iops : min= 198, max= 322, avg=269.33, stdev=34.29, samples=9 00:27:02.023 lat (msec) : 10=30.61%, 20=67.62%, 50=0.29%, 100=1.47% 00:27:02.023 cpu : usr=91.76%, sys=6.34%, ctx=18, majf=0, minf=0 00:27:02.023 IO depths : 1=1.0%, 2=99.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:02.023 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:02.023 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:02.023 issued rwts: total=1359,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:02.023 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:02.023 filename0: (groupid=0, jobs=1): err= 0: pid=97677: Wed Jul 24 17:27:47 2024 00:27:02.023 read: IOPS=219, BW=27.4MiB/s (28.8MB/s)(137MiB/5004msec) 00:27:02.023 slat (nsec): min=6515, max=59714, avg=11196.56, stdev=6142.48 00:27:02.023 clat (usec): min=4072, max=50575, avg=13642.01, stdev=3858.63 00:27:02.023 lat (usec): min=4079, max=50585, avg=13653.21, stdev=3858.82 00:27:02.023 clat percentiles (usec): 00:27:02.023 | 1.00th=[ 4228], 5.00th=[ 8225], 10.00th=[ 8979], 20.00th=[12256], 00:27:02.023 | 30.00th=[13566], 40.00th=[13960], 50.00th=[14484], 60.00th=[14746], 00:27:02.023 | 70.00th=[15139], 80.00th=[15401], 90.00th=[15926], 95.00th=[16319], 00:27:02.023 | 99.00th=[16909], 99.50th=[47449], 99.90th=[49021], 99.95th=[50594], 00:27:02.023 | 99.99th=[50594] 00:27:02.023 bw ( KiB/s): min=25344, max=36864, per=29.91%, avg=28037.00, stdev=3529.16, samples=10 00:27:02.023 iops : min= 198, max= 288, avg=219.00, stdev=27.60, samples=10 00:27:02.023 lat (msec) : 10=15.66%, 20=83.79%, 50=0.46%, 100=0.09% 00:27:02.023 cpu : usr=92.64%, sys=5.74%, ctx=8, majf=0, minf=0 00:27:02.023 IO depths : 1=32.1%, 2=67.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:02.023 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:02.023 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:02.023 issued rwts: total=1098,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:02.023 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:02.023 filename0: (groupid=0, jobs=1): err= 0: pid=97678: Wed Jul 24 17:27:47 2024 00:27:02.023 read: IOPS=241, BW=30.2MiB/s (31.6MB/s)(151MiB/5003msec) 00:27:02.023 slat (nsec): min=6520, max=65340, avg=11492.16, stdev=5739.41 00:27:02.023 clat (usec): min=4074, max=54285, avg=12411.55, stdev=6591.75 00:27:02.023 lat (usec): min=4084, max=54314, avg=12423.04, stdev=6591.81 00:27:02.023 clat percentiles (usec): 00:27:02.023 | 1.00th=[ 4817], 5.00th=[ 7701], 10.00th=[ 9634], 20.00th=[10814], 
00:27:02.023 | 30.00th=[11207], 40.00th=[11469], 50.00th=[11731], 60.00th=[11994], 00:27:02.023 | 70.00th=[12256], 80.00th=[12518], 90.00th=[12911], 95.00th=[13435], 00:27:02.023 | 99.00th=[52691], 99.50th=[53216], 99.90th=[54264], 99.95th=[54264], 00:27:02.023 | 99.99th=[54264] 00:27:02.023 bw ( KiB/s): min=26880, max=36168, per=32.63%, avg=30585.78, stdev=2972.27, samples=9 00:27:02.023 iops : min= 210, max= 282, avg=238.89, stdev=23.09, samples=9 00:27:02.023 lat (msec) : 10=11.18%, 20=86.33%, 50=0.25%, 100=2.24% 00:27:02.023 cpu : usr=92.06%, sys=6.38%, ctx=77, majf=0, minf=0 00:27:02.023 IO depths : 1=11.3%, 2=88.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:02.023 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:02.023 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:02.023 issued rwts: total=1207,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:02.023 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:02.023 00:27:02.023 Run status group 0 (all jobs): 00:27:02.023 READ: bw=91.5MiB/s (96.0MB/s), 27.4MiB/s-33.9MiB/s (28.8MB/s-35.6MB/s), io=458MiB (480MB), run=5003-5004msec 00:27:02.023 17:27:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:27:02.023 17:27:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:27:02.023 17:27:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:02.023 17:27:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:02.023 17:27:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:27:02.023 17:27:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:02.023 17:27:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.023 17:27:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:02.023 17:27:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.023 17:27:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:02.023 17:27:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.023 17:27:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:02.023 17:27:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.023 17:27:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:27:02.023 17:27:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:27:02.023 17:27:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:27:02.023 17:27:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:27:02.023 17:27:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:27:02.023 17:27:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:27:02.023 17:27:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:27:02.023 17:27:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:27:02.023 17:27:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:02.023 17:27:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:27:02.023 17:27:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:27:02.023 17:27:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd 
bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:27:02.023 17:27:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.023 17:27:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:02.023 bdev_null0 00:27:02.023 17:27:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.023 17:27:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:02.023 17:27:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.023 17:27:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:02.023 17:27:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.023 17:27:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:02.023 17:27:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.023 17:27:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:02.023 17:27:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.023 17:27:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:02.023 17:27:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.023 17:27:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:02.023 [2024-07-24 17:27:47.391603] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:02.023 17:27:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.023 17:27:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:02.023 17:27:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:27:02.023 17:27:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:27:02.023 17:27:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:27:02.023 17:27:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.024 17:27:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:02.024 bdev_null1 00:27:02.024 17:27:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.024 17:27:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:27:02.024 17:27:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.024 17:27:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:02.024 17:27:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.024 17:27:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:02.024 17:27:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.024 17:27:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:02.024 17:27:47 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.024 17:27:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:02.024 17:27:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.024 17:27:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:02.024 17:27:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.024 17:27:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:02.024 17:27:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:27:02.024 17:27:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:27:02.024 17:27:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:27:02.024 17:27:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.024 17:27:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:02.024 bdev_null2 00:27:02.024 17:27:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.024 17:27:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:27:02.024 17:27:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.024 17:27:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:02.024 17:27:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.024 17:27:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:27:02.024 17:27:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.024 17:27:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:02.024 17:27:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.024 17:27:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:27:02.024 17:27:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.024 17:27:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:02.024 17:27:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.024 17:27:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:27:02.024 17:27:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:27:02.024 17:27:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:27:02.024 17:27:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:27:02.024 17:27:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:02.024 17:27:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:27:02.024 17:27:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:02.024 17:27:47 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:02.024 17:27:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:27:02.024 17:27:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:02.024 { 00:27:02.024 "params": { 00:27:02.024 "name": "Nvme$subsystem", 00:27:02.024 "trtype": "$TEST_TRANSPORT", 00:27:02.024 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:02.024 "adrfam": "ipv4", 00:27:02.024 "trsvcid": "$NVMF_PORT", 00:27:02.024 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:02.024 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:02.024 "hdgst": ${hdgst:-false}, 00:27:02.024 "ddgst": ${ddgst:-false} 00:27:02.024 }, 00:27:02.024 "method": "bdev_nvme_attach_controller" 00:27:02.024 } 00:27:02.024 EOF 00:27:02.024 )") 00:27:02.024 17:27:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:02.024 17:27:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:27:02.024 17:27:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:02.024 17:27:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:27:02.024 17:27:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:27:02.024 17:27:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:27:02.024 17:27:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:02.024 17:27:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:27:02.024 17:27:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:27:02.024 17:27:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:27:02.024 17:27:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:27:02.024 17:27:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:02.024 17:27:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:02.024 17:27:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:27:02.024 17:27:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:02.024 17:27:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:27:02.024 17:27:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:02.024 17:27:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:02.024 { 00:27:02.024 "params": { 00:27:02.024 "name": "Nvme$subsystem", 00:27:02.024 "trtype": "$TEST_TRANSPORT", 00:27:02.024 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:02.024 "adrfam": "ipv4", 00:27:02.024 "trsvcid": "$NVMF_PORT", 00:27:02.024 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:02.024 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:02.024 "hdgst": ${hdgst:-false}, 00:27:02.024 "ddgst": ${ddgst:-false} 00:27:02.024 }, 00:27:02.024 "method": "bdev_nvme_attach_controller" 00:27:02.024 } 00:27:02.024 EOF 00:27:02.024 )") 00:27:02.024 17:27:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:27:02.024 17:27:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:27:02.024 17:27:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:02.024 17:27:47 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:27:02.024 17:27:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:27:02.024 17:27:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:02.024 17:27:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:02.024 17:27:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:02.024 { 00:27:02.024 "params": { 00:27:02.024 "name": "Nvme$subsystem", 00:27:02.024 "trtype": "$TEST_TRANSPORT", 00:27:02.024 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:02.024 "adrfam": "ipv4", 00:27:02.024 "trsvcid": "$NVMF_PORT", 00:27:02.024 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:02.024 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:02.024 "hdgst": ${hdgst:-false}, 00:27:02.024 "ddgst": ${ddgst:-false} 00:27:02.024 }, 00:27:02.024 "method": "bdev_nvme_attach_controller" 00:27:02.024 } 00:27:02.024 EOF 00:27:02.024 )") 00:27:02.024 17:27:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:27:02.024 17:27:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:27:02.024 17:27:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:27:02.024 17:27:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:02.024 "params": { 00:27:02.024 "name": "Nvme0", 00:27:02.024 "trtype": "tcp", 00:27:02.024 "traddr": "10.0.0.2", 00:27:02.024 "adrfam": "ipv4", 00:27:02.024 "trsvcid": "4420", 00:27:02.024 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:02.024 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:02.024 "hdgst": false, 00:27:02.024 "ddgst": false 00:27:02.024 }, 00:27:02.024 "method": "bdev_nvme_attach_controller" 00:27:02.024 },{ 00:27:02.024 "params": { 00:27:02.024 "name": "Nvme1", 00:27:02.024 "trtype": "tcp", 00:27:02.024 "traddr": "10.0.0.2", 00:27:02.024 "adrfam": "ipv4", 00:27:02.024 "trsvcid": "4420", 00:27:02.024 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:02.024 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:02.024 "hdgst": false, 00:27:02.025 "ddgst": false 00:27:02.025 }, 00:27:02.025 "method": "bdev_nvme_attach_controller" 00:27:02.025 },{ 00:27:02.025 "params": { 00:27:02.025 "name": "Nvme2", 00:27:02.025 "trtype": "tcp", 00:27:02.025 "traddr": "10.0.0.2", 00:27:02.025 "adrfam": "ipv4", 00:27:02.025 "trsvcid": "4420", 00:27:02.025 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:02.025 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:02.025 "hdgst": false, 00:27:02.025 "ddgst": false 00:27:02.025 }, 00:27:02.025 "method": "bdev_nvme_attach_controller" 00:27:02.025 }' 00:27:02.025 17:27:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:02.025 17:27:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:02.025 17:27:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:02.025 17:27:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:02.025 17:27:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:27:02.025 17:27:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:02.025 17:27:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:02.025 17:27:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:02.025 
17:27:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:02.025 17:27:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:02.025 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:02.025 ... 00:27:02.025 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:02.025 ... 00:27:02.025 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:02.025 ... 00:27:02.025 fio-3.35 00:27:02.025 Starting 24 threads 00:27:14.238 00:27:14.238 filename0: (groupid=0, jobs=1): err= 0: pid=97779: Wed Jul 24 17:27:58 2024 00:27:14.238 read: IOPS=207, BW=828KiB/s (848kB/s)(8324KiB/10051msec) 00:27:14.238 slat (usec): min=7, max=179, avg=10.44, stdev= 5.11 00:27:14.238 clat (msec): min=32, max=158, avg=76.99, stdev=22.83 00:27:14.238 lat (msec): min=32, max=158, avg=77.00, stdev=22.83 00:27:14.238 clat percentiles (msec): 00:27:14.238 | 1.00th=[ 40], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 58], 00:27:14.238 | 30.00th=[ 66], 40.00th=[ 71], 50.00th=[ 73], 60.00th=[ 81], 00:27:14.238 | 70.00th=[ 86], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 121], 00:27:14.238 | 99.00th=[ 142], 99.50th=[ 159], 99.90th=[ 159], 99.95th=[ 159], 00:27:14.238 | 99.99th=[ 159] 00:27:14.238 bw ( KiB/s): min= 512, max= 1104, per=4.47%, avg=830.00, stdev=124.34, samples=20 00:27:14.238 iops : min= 128, max= 276, avg=207.50, stdev=31.08, samples=20 00:27:14.238 lat (msec) : 50=15.43%, 100=70.06%, 250=14.51% 00:27:14.238 cpu : usr=38.26%, sys=1.10%, ctx=1128, majf=0, minf=9 00:27:14.238 IO depths : 1=0.9%, 2=1.9%, 4=8.6%, 8=75.5%, 16=13.1%, 32=0.0%, >=64=0.0% 00:27:14.238 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:14.238 complete : 0=0.0%, 4=89.4%, 8=6.4%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:14.238 issued rwts: total=2081,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:14.238 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:14.238 filename0: (groupid=0, jobs=1): err= 0: pid=97780: Wed Jul 24 17:27:58 2024 00:27:14.238 read: IOPS=183, BW=735KiB/s (752kB/s)(7380KiB/10043msec) 00:27:14.238 slat (nsec): min=6891, max=24875, avg=10211.77, stdev=3500.92 00:27:14.238 clat (msec): min=17, max=183, avg=86.93, stdev=30.04 00:27:14.238 lat (msec): min=17, max=183, avg=86.94, stdev=30.04 00:27:14.238 clat percentiles (msec): 00:27:14.238 | 1.00th=[ 35], 5.00th=[ 45], 10.00th=[ 47], 20.00th=[ 61], 00:27:14.238 | 30.00th=[ 69], 40.00th=[ 77], 50.00th=[ 86], 60.00th=[ 96], 00:27:14.238 | 70.00th=[ 101], 80.00th=[ 108], 90.00th=[ 125], 95.00th=[ 144], 00:27:14.238 | 99.00th=[ 157], 99.50th=[ 157], 99.90th=[ 184], 99.95th=[ 184], 00:27:14.238 | 99.99th=[ 184] 00:27:14.238 bw ( KiB/s): min= 464, max= 1200, per=3.94%, avg=731.60, stdev=202.74, samples=20 00:27:14.238 iops : min= 116, max= 300, avg=182.90, stdev=50.69, samples=20 00:27:14.238 lat (msec) : 20=0.87%, 50=14.09%, 100=53.71%, 250=31.33% 00:27:14.238 cpu : usr=37.29%, sys=1.00%, ctx=1085, majf=0, minf=9 00:27:14.238 IO depths : 1=1.6%, 2=3.5%, 4=11.4%, 8=71.5%, 16=12.0%, 32=0.0%, >=64=0.0% 00:27:14.238 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:14.238 complete : 0=0.0%, 4=90.6%, 8=4.9%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:27:14.238 issued rwts: total=1845,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:14.238 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:14.238 filename0: (groupid=0, jobs=1): err= 0: pid=97781: Wed Jul 24 17:27:58 2024 00:27:14.238 read: IOPS=183, BW=735KiB/s (752kB/s)(7372KiB/10033msec) 00:27:14.238 slat (nsec): min=4954, max=35682, avg=10426.82, stdev=3416.82 00:27:14.238 clat (msec): min=33, max=155, avg=87.00, stdev=25.14 00:27:14.238 lat (msec): min=33, max=155, avg=87.01, stdev=25.13 00:27:14.238 clat percentiles (msec): 00:27:14.238 | 1.00th=[ 38], 5.00th=[ 48], 10.00th=[ 57], 20.00th=[ 65], 00:27:14.238 | 30.00th=[ 72], 40.00th=[ 75], 50.00th=[ 85], 60.00th=[ 96], 00:27:14.238 | 70.00th=[ 104], 80.00th=[ 109], 90.00th=[ 118], 95.00th=[ 130], 00:27:14.238 | 99.00th=[ 153], 99.50th=[ 155], 99.90th=[ 157], 99.95th=[ 157], 00:27:14.238 | 99.99th=[ 157] 00:27:14.238 bw ( KiB/s): min= 512, max= 1072, per=3.94%, avg=730.80, stdev=130.15, samples=20 00:27:14.238 iops : min= 128, max= 268, avg=182.70, stdev=32.54, samples=20 00:27:14.238 lat (msec) : 50=8.68%, 100=58.22%, 250=33.10% 00:27:14.238 cpu : usr=39.56%, sys=0.96%, ctx=1171, majf=0, minf=9 00:27:14.238 IO depths : 1=2.0%, 2=4.2%, 4=12.6%, 8=69.8%, 16=11.4%, 32=0.0%, >=64=0.0% 00:27:14.239 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:14.239 complete : 0=0.0%, 4=90.6%, 8=4.5%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:14.239 issued rwts: total=1843,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:14.239 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:14.239 filename0: (groupid=0, jobs=1): err= 0: pid=97782: Wed Jul 24 17:27:58 2024 00:27:14.239 read: IOPS=167, BW=671KiB/s (688kB/s)(6724KiB/10015msec) 00:27:14.239 slat (usec): min=7, max=8020, avg=25.35, stdev=325.66 00:27:14.239 clat (msec): min=34, max=202, avg=95.12, stdev=27.45 00:27:14.239 lat (msec): min=34, max=202, avg=95.14, stdev=27.45 00:27:14.239 clat percentiles (msec): 00:27:14.239 | 1.00th=[ 37], 5.00th=[ 57], 10.00th=[ 66], 20.00th=[ 72], 00:27:14.239 | 30.00th=[ 80], 40.00th=[ 86], 50.00th=[ 94], 60.00th=[ 103], 00:27:14.239 | 70.00th=[ 108], 80.00th=[ 113], 90.00th=[ 136], 95.00th=[ 144], 00:27:14.239 | 99.00th=[ 169], 99.50th=[ 169], 99.90th=[ 203], 99.95th=[ 203], 00:27:14.239 | 99.99th=[ 203] 00:27:14.239 bw ( KiB/s): min= 512, max= 896, per=3.60%, avg=668.00, stdev=101.57, samples=20 00:27:14.239 iops : min= 128, max= 224, avg=167.00, stdev=25.39, samples=20 00:27:14.239 lat (msec) : 50=3.99%, 100=55.62%, 250=40.39% 00:27:14.239 cpu : usr=37.88%, sys=1.09%, ctx=1096, majf=0, minf=9 00:27:14.239 IO depths : 1=2.9%, 2=6.4%, 4=16.5%, 8=64.4%, 16=9.9%, 32=0.0%, >=64=0.0% 00:27:14.239 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:14.239 complete : 0=0.0%, 4=91.8%, 8=2.8%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:14.239 issued rwts: total=1681,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:14.239 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:14.239 filename0: (groupid=0, jobs=1): err= 0: pid=97783: Wed Jul 24 17:27:58 2024 00:27:14.239 read: IOPS=190, BW=763KiB/s (781kB/s)(7644KiB/10018msec) 00:27:14.239 slat (nsec): min=4843, max=38041, avg=10711.78, stdev=3706.23 00:27:14.239 clat (msec): min=32, max=187, avg=83.81, stdev=27.39 00:27:14.239 lat (msec): min=32, max=187, avg=83.82, stdev=27.39 00:27:14.239 clat percentiles (msec): 00:27:14.239 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 55], 20.00th=[ 63], 00:27:14.239 | 30.00th=[ 71], 40.00th=[ 72], 50.00th=[ 77], 
60.00th=[ 85], 00:27:14.239 | 70.00th=[ 95], 80.00th=[ 107], 90.00th=[ 121], 95.00th=[ 133], 00:27:14.239 | 99.00th=[ 169], 99.50th=[ 180], 99.90th=[ 188], 99.95th=[ 188], 00:27:14.239 | 99.99th=[ 188] 00:27:14.239 bw ( KiB/s): min= 472, max= 1000, per=4.08%, avg=757.70, stdev=158.38, samples=20 00:27:14.239 iops : min= 118, max= 250, avg=189.40, stdev=39.60, samples=20 00:27:14.239 lat (msec) : 50=8.69%, 100=65.72%, 250=25.59% 00:27:14.239 cpu : usr=38.99%, sys=1.14%, ctx=1103, majf=0, minf=9 00:27:14.239 IO depths : 1=1.1%, 2=2.6%, 4=10.6%, 8=73.5%, 16=12.3%, 32=0.0%, >=64=0.0% 00:27:14.239 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:14.239 complete : 0=0.0%, 4=90.1%, 8=5.2%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:14.239 issued rwts: total=1911,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:14.239 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:14.239 filename0: (groupid=0, jobs=1): err= 0: pid=97784: Wed Jul 24 17:27:58 2024 00:27:14.239 read: IOPS=187, BW=751KiB/s (769kB/s)(7528KiB/10024msec) 00:27:14.239 slat (usec): min=4, max=8021, avg=19.03, stdev=261.04 00:27:14.239 clat (msec): min=34, max=185, avg=85.05, stdev=26.02 00:27:14.239 lat (msec): min=34, max=185, avg=85.07, stdev=26.03 00:27:14.239 clat percentiles (msec): 00:27:14.239 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 57], 20.00th=[ 64], 00:27:14.239 | 30.00th=[ 72], 40.00th=[ 73], 50.00th=[ 83], 60.00th=[ 85], 00:27:14.239 | 70.00th=[ 96], 80.00th=[ 108], 90.00th=[ 121], 95.00th=[ 136], 00:27:14.239 | 99.00th=[ 157], 99.50th=[ 180], 99.90th=[ 186], 99.95th=[ 186], 00:27:14.239 | 99.99th=[ 186] 00:27:14.239 bw ( KiB/s): min= 512, max= 920, per=4.02%, avg=746.30, stdev=134.64, samples=20 00:27:14.239 iops : min= 128, max= 230, avg=186.55, stdev=33.67, samples=20 00:27:14.239 lat (msec) : 50=6.85%, 100=68.49%, 250=24.65% 00:27:14.239 cpu : usr=32.25%, sys=0.93%, ctx=854, majf=0, minf=9 00:27:14.239 IO depths : 1=1.1%, 2=2.3%, 4=9.9%, 8=74.1%, 16=12.6%, 32=0.0%, >=64=0.0% 00:27:14.239 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:14.239 complete : 0=0.0%, 4=90.0%, 8=5.5%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:14.239 issued rwts: total=1882,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:14.239 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:14.239 filename0: (groupid=0, jobs=1): err= 0: pid=97785: Wed Jul 24 17:27:58 2024 00:27:14.239 read: IOPS=166, BW=664KiB/s (680kB/s)(6652KiB/10018msec) 00:27:14.239 slat (usec): min=4, max=8017, avg=15.15, stdev=196.37 00:27:14.239 clat (msec): min=33, max=177, avg=96.25, stdev=28.48 00:27:14.239 lat (msec): min=33, max=177, avg=96.27, stdev=28.48 00:27:14.239 clat percentiles (msec): 00:27:14.239 | 1.00th=[ 47], 5.00th=[ 54], 10.00th=[ 62], 20.00th=[ 72], 00:27:14.239 | 30.00th=[ 80], 40.00th=[ 86], 50.00th=[ 95], 60.00th=[ 105], 00:27:14.239 | 70.00th=[ 108], 80.00th=[ 118], 90.00th=[ 136], 95.00th=[ 155], 00:27:14.239 | 99.00th=[ 169], 99.50th=[ 169], 99.90th=[ 178], 99.95th=[ 178], 00:27:14.239 | 99.99th=[ 178] 00:27:14.239 bw ( KiB/s): min= 440, max= 896, per=3.56%, avg=660.15, stdev=132.34, samples=20 00:27:14.239 iops : min= 110, max= 224, avg=165.00, stdev=33.08, samples=20 00:27:14.239 lat (msec) : 50=4.33%, 100=52.80%, 250=42.87% 00:27:14.239 cpu : usr=32.39%, sys=0.79%, ctx=863, majf=0, minf=9 00:27:14.239 IO depths : 1=2.8%, 2=5.8%, 4=14.8%, 8=66.5%, 16=10.0%, 32=0.0%, >=64=0.0% 00:27:14.239 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:14.239 complete : 
0=0.0%, 4=91.3%, 8=3.4%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:14.239 issued rwts: total=1663,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:14.239 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:14.239 filename0: (groupid=0, jobs=1): err= 0: pid=97786: Wed Jul 24 17:27:58 2024 00:27:14.239 read: IOPS=170, BW=681KiB/s (697kB/s)(6820KiB/10013msec) 00:27:14.239 slat (nsec): min=7539, max=38747, avg=10454.45, stdev=3558.01 00:27:14.239 clat (msec): min=17, max=192, avg=93.89, stdev=29.27 00:27:14.239 lat (msec): min=17, max=192, avg=93.90, stdev=29.27 00:27:14.239 clat percentiles (msec): 00:27:14.239 | 1.00th=[ 36], 5.00th=[ 56], 10.00th=[ 61], 20.00th=[ 72], 00:27:14.239 | 30.00th=[ 72], 40.00th=[ 84], 50.00th=[ 94], 60.00th=[ 96], 00:27:14.239 | 70.00th=[ 108], 80.00th=[ 117], 90.00th=[ 140], 95.00th=[ 146], 00:27:14.239 | 99.00th=[ 171], 99.50th=[ 192], 99.90th=[ 192], 99.95th=[ 192], 00:27:14.239 | 99.99th=[ 192] 00:27:14.239 bw ( KiB/s): min= 424, max= 888, per=3.64%, avg=675.60, stdev=128.01, samples=20 00:27:14.239 iops : min= 106, max= 222, avg=168.90, stdev=32.00, samples=20 00:27:14.239 lat (msec) : 20=0.35%, 50=3.58%, 100=62.52%, 250=33.55% 00:27:14.239 cpu : usr=32.36%, sys=0.84%, ctx=841, majf=0, minf=9 00:27:14.239 IO depths : 1=2.4%, 2=5.3%, 4=14.4%, 8=66.9%, 16=11.0%, 32=0.0%, >=64=0.0% 00:27:14.239 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:14.239 complete : 0=0.0%, 4=91.2%, 8=3.9%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:14.239 issued rwts: total=1705,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:14.239 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:14.239 filename1: (groupid=0, jobs=1): err= 0: pid=97787: Wed Jul 24 17:27:58 2024 00:27:14.239 read: IOPS=223, BW=894KiB/s (916kB/s)(9000KiB/10066msec) 00:27:14.239 slat (nsec): min=3890, max=39729, avg=9470.43, stdev=2877.69 00:27:14.239 clat (usec): min=1339, max=143028, avg=71338.10, stdev=26916.46 00:27:14.239 lat (usec): min=1347, max=143047, avg=71347.57, stdev=26916.60 00:27:14.239 clat percentiles (msec): 00:27:14.239 | 1.00th=[ 3], 5.00th=[ 24], 10.00th=[ 42], 20.00th=[ 51], 00:27:14.239 | 30.00th=[ 61], 40.00th=[ 67], 50.00th=[ 71], 60.00th=[ 77], 00:27:14.239 | 70.00th=[ 84], 80.00th=[ 92], 90.00th=[ 106], 95.00th=[ 118], 00:27:14.239 | 99.00th=[ 138], 99.50th=[ 140], 99.90th=[ 144], 99.95th=[ 144], 00:27:14.239 | 99.99th=[ 144] 00:27:14.239 bw ( KiB/s): min= 640, max= 2112, per=4.82%, avg=893.60, stdev=309.74, samples=20 00:27:14.239 iops : min= 160, max= 528, avg=223.40, stdev=77.44, samples=20 00:27:14.239 lat (msec) : 2=0.62%, 4=1.82%, 10=1.51%, 20=0.62%, 50=14.80% 00:27:14.239 lat (msec) : 100=67.73%, 250=12.89% 00:27:14.239 cpu : usr=39.16%, sys=1.13%, ctx=1562, majf=0, minf=9 00:27:14.239 IO depths : 1=0.8%, 2=2.0%, 4=8.7%, 8=75.5%, 16=13.0%, 32=0.0%, >=64=0.0% 00:27:14.239 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:14.239 complete : 0=0.0%, 4=89.9%, 8=5.8%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:14.239 issued rwts: total=2250,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:14.239 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:14.239 filename1: (groupid=0, jobs=1): err= 0: pid=97788: Wed Jul 24 17:27:58 2024 00:27:14.239 read: IOPS=219, BW=878KiB/s (899kB/s)(8816KiB/10040msec) 00:27:14.239 slat (usec): min=4, max=4017, avg=11.73, stdev=85.42 00:27:14.239 clat (msec): min=32, max=140, avg=72.74, stdev=21.16 00:27:14.239 lat (msec): min=32, max=140, avg=72.75, stdev=21.16 00:27:14.239 clat 
percentiles (msec): 00:27:14.239 | 1.00th=[ 36], 5.00th=[ 42], 10.00th=[ 47], 20.00th=[ 56], 00:27:14.239 | 30.00th=[ 62], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 74], 00:27:14.239 | 70.00th=[ 81], 80.00th=[ 86], 90.00th=[ 104], 95.00th=[ 110], 00:27:14.239 | 99.00th=[ 138], 99.50th=[ 140], 99.90th=[ 142], 99.95th=[ 142], 00:27:14.239 | 99.99th=[ 142] 00:27:14.239 bw ( KiB/s): min= 600, max= 1248, per=4.72%, avg=875.20, stdev=155.11, samples=20 00:27:14.239 iops : min= 150, max= 312, avg=218.80, stdev=38.78, samples=20 00:27:14.239 lat (msec) : 50=17.01%, 100=69.87%, 250=13.11% 00:27:14.239 cpu : usr=37.56%, sys=1.09%, ctx=1089, majf=0, minf=9 00:27:14.239 IO depths : 1=0.8%, 2=1.7%, 4=7.8%, 8=77.1%, 16=12.6%, 32=0.0%, >=64=0.0% 00:27:14.239 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:14.239 complete : 0=0.0%, 4=89.5%, 8=5.9%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:14.239 issued rwts: total=2204,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:14.239 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:14.239 filename1: (groupid=0, jobs=1): err= 0: pid=97789: Wed Jul 24 17:27:58 2024 00:27:14.239 read: IOPS=205, BW=821KiB/s (840kB/s)(8252KiB/10056msec) 00:27:14.239 slat (usec): min=3, max=8016, avg=19.98, stdev=233.25 00:27:14.239 clat (msec): min=7, max=147, avg=77.77, stdev=25.40 00:27:14.239 lat (msec): min=7, max=147, avg=77.79, stdev=25.40 00:27:14.239 clat percentiles (msec): 00:27:14.240 | 1.00th=[ 10], 5.00th=[ 41], 10.00th=[ 47], 20.00th=[ 60], 00:27:14.240 | 30.00th=[ 65], 40.00th=[ 71], 50.00th=[ 74], 60.00th=[ 84], 00:27:14.240 | 70.00th=[ 91], 80.00th=[ 100], 90.00th=[ 110], 95.00th=[ 121], 00:27:14.240 | 99.00th=[ 142], 99.50th=[ 142], 99.90th=[ 148], 99.95th=[ 148], 00:27:14.240 | 99.99th=[ 148] 00:27:14.240 bw ( KiB/s): min= 592, max= 1280, per=4.41%, avg=818.80, stdev=155.83, samples=20 00:27:14.240 iops : min= 148, max= 320, avg=204.70, stdev=38.96, samples=20 00:27:14.240 lat (msec) : 10=1.55%, 20=0.68%, 50=12.51%, 100=65.58%, 250=19.68% 00:27:14.240 cpu : usr=39.35%, sys=1.05%, ctx=1191, majf=0, minf=9 00:27:14.240 IO depths : 1=0.9%, 2=1.8%, 4=9.7%, 8=74.7%, 16=12.8%, 32=0.0%, >=64=0.0% 00:27:14.240 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:14.240 complete : 0=0.0%, 4=90.0%, 8=5.6%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:14.240 issued rwts: total=2063,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:14.240 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:14.240 filename1: (groupid=0, jobs=1): err= 0: pid=97790: Wed Jul 24 17:27:58 2024 00:27:14.240 read: IOPS=209, BW=837KiB/s (857kB/s)(8412KiB/10051msec) 00:27:14.240 slat (usec): min=4, max=4017, avg=16.24, stdev=145.71 00:27:14.240 clat (msec): min=30, max=174, avg=76.28, stdev=24.12 00:27:14.240 lat (msec): min=30, max=174, avg=76.30, stdev=24.12 00:27:14.240 clat percentiles (msec): 00:27:14.240 | 1.00th=[ 32], 5.00th=[ 43], 10.00th=[ 48], 20.00th=[ 57], 00:27:14.240 | 30.00th=[ 63], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 80], 00:27:14.240 | 70.00th=[ 85], 80.00th=[ 99], 90.00th=[ 108], 95.00th=[ 118], 00:27:14.240 | 99.00th=[ 142], 99.50th=[ 142], 99.90th=[ 176], 99.95th=[ 176], 00:27:14.240 | 99.99th=[ 176] 00:27:14.240 bw ( KiB/s): min= 640, max= 1152, per=4.50%, avg=834.80, stdev=145.89, samples=20 00:27:14.240 iops : min= 160, max= 288, avg=208.70, stdev=36.47, samples=20 00:27:14.240 lat (msec) : 50=14.98%, 100=65.48%, 250=19.54% 00:27:14.240 cpu : usr=46.38%, sys=1.13%, ctx=1592, majf=0, minf=9 00:27:14.240 IO depths 
: 1=1.1%, 2=2.3%, 4=8.7%, 8=75.6%, 16=12.4%, 32=0.0%, >=64=0.0% 00:27:14.240 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:14.240 complete : 0=0.0%, 4=89.6%, 8=5.8%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:14.240 issued rwts: total=2103,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:14.240 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:14.240 filename1: (groupid=0, jobs=1): err= 0: pid=97791: Wed Jul 24 17:27:58 2024 00:27:14.240 read: IOPS=216, BW=868KiB/s (889kB/s)(8728KiB/10057msec) 00:27:14.240 slat (usec): min=6, max=8027, avg=28.00, stdev=332.48 00:27:14.240 clat (msec): min=4, max=167, avg=73.43, stdev=24.81 00:27:14.240 lat (msec): min=4, max=167, avg=73.46, stdev=24.83 00:27:14.240 clat percentiles (msec): 00:27:14.240 | 1.00th=[ 12], 5.00th=[ 42], 10.00th=[ 45], 20.00th=[ 55], 00:27:14.240 | 30.00th=[ 61], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 77], 00:27:14.240 | 70.00th=[ 82], 80.00th=[ 91], 90.00th=[ 108], 95.00th=[ 120], 00:27:14.240 | 99.00th=[ 144], 99.50th=[ 144], 99.90th=[ 169], 99.95th=[ 169], 00:27:14.240 | 99.99th=[ 169] 00:27:14.240 bw ( KiB/s): min= 640, max= 1408, per=4.67%, avg=866.40, stdev=206.78, samples=20 00:27:14.240 iops : min= 160, max= 352, avg=216.60, stdev=51.70, samples=20 00:27:14.240 lat (msec) : 10=0.73%, 20=1.47%, 50=13.66%, 100=70.62%, 250=13.52% 00:27:14.240 cpu : usr=40.05%, sys=1.24%, ctx=1181, majf=0, minf=9 00:27:14.240 IO depths : 1=1.9%, 2=3.9%, 4=12.9%, 8=70.1%, 16=11.2%, 32=0.0%, >=64=0.0% 00:27:14.240 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:14.240 complete : 0=0.0%, 4=90.6%, 8=4.4%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:14.240 issued rwts: total=2182,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:14.240 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:14.240 filename1: (groupid=0, jobs=1): err= 0: pid=97792: Wed Jul 24 17:27:58 2024 00:27:14.240 read: IOPS=178, BW=712KiB/s (729kB/s)(7136KiB/10018msec) 00:27:14.240 slat (nsec): min=6603, max=33209, avg=10289.62, stdev=3568.71 00:27:14.240 clat (msec): min=35, max=177, avg=89.75, stdev=30.65 00:27:14.240 lat (msec): min=35, max=177, avg=89.76, stdev=30.65 00:27:14.240 clat percentiles (msec): 00:27:14.240 | 1.00th=[ 37], 5.00th=[ 43], 10.00th=[ 48], 20.00th=[ 61], 00:27:14.240 | 30.00th=[ 72], 40.00th=[ 74], 50.00th=[ 88], 60.00th=[ 101], 00:27:14.240 | 70.00th=[ 108], 80.00th=[ 120], 90.00th=[ 131], 95.00th=[ 144], 00:27:14.240 | 99.00th=[ 161], 99.50th=[ 174], 99.90th=[ 178], 99.95th=[ 178], 00:27:14.240 | 99.99th=[ 178] 00:27:14.240 bw ( KiB/s): min= 504, max= 1024, per=3.81%, avg=706.95, stdev=153.72, samples=20 00:27:14.240 iops : min= 126, max= 256, avg=176.70, stdev=38.43, samples=20 00:27:14.240 lat (msec) : 50=14.01%, 100=44.73%, 250=41.26% 00:27:14.240 cpu : usr=35.41%, sys=0.75%, ctx=957, majf=0, minf=9 00:27:14.240 IO depths : 1=1.8%, 2=3.9%, 4=12.4%, 8=70.2%, 16=11.7%, 32=0.0%, >=64=0.0% 00:27:14.240 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:14.240 complete : 0=0.0%, 4=90.8%, 8=4.6%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:14.240 issued rwts: total=1784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:14.240 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:14.240 filename1: (groupid=0, jobs=1): err= 0: pid=97793: Wed Jul 24 17:27:58 2024 00:27:14.240 read: IOPS=204, BW=820KiB/s (839kB/s)(8232KiB/10045msec) 00:27:14.240 slat (usec): min=4, max=4020, avg=13.18, stdev=99.06 00:27:14.240 clat (msec): min=32, max=179, avg=77.93, 
stdev=24.39 00:27:14.240 lat (msec): min=32, max=179, avg=77.94, stdev=24.39 00:27:14.240 clat percentiles (msec): 00:27:14.240 | 1.00th=[ 35], 5.00th=[ 42], 10.00th=[ 48], 20.00th=[ 60], 00:27:14.240 | 30.00th=[ 64], 40.00th=[ 71], 50.00th=[ 73], 60.00th=[ 81], 00:27:14.240 | 70.00th=[ 88], 80.00th=[ 96], 90.00th=[ 112], 95.00th=[ 125], 00:27:14.240 | 99.00th=[ 144], 99.50th=[ 150], 99.90th=[ 180], 99.95th=[ 180], 00:27:14.240 | 99.99th=[ 180] 00:27:14.240 bw ( KiB/s): min= 608, max= 1120, per=4.40%, avg=816.50, stdev=150.26, samples=20 00:27:14.240 iops : min= 152, max= 280, avg=204.10, stdev=37.54, samples=20 00:27:14.240 lat (msec) : 50=12.68%, 100=71.48%, 250=15.84% 00:27:14.240 cpu : usr=38.98%, sys=0.80%, ctx=1194, majf=0, minf=9 00:27:14.240 IO depths : 1=0.3%, 2=0.6%, 4=5.6%, 8=79.5%, 16=13.9%, 32=0.0%, >=64=0.0% 00:27:14.240 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:14.240 complete : 0=0.0%, 4=89.1%, 8=7.0%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:14.240 issued rwts: total=2058,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:14.240 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:14.240 filename1: (groupid=0, jobs=1): err= 0: pid=97794: Wed Jul 24 17:27:58 2024 00:27:14.240 read: IOPS=208, BW=834KiB/s (854kB/s)(8380KiB/10051msec) 00:27:14.240 slat (usec): min=6, max=8018, avg=21.66, stdev=302.91 00:27:14.240 clat (msec): min=11, max=155, avg=76.54, stdev=26.40 00:27:14.240 lat (msec): min=11, max=155, avg=76.56, stdev=26.40 00:27:14.240 clat percentiles (msec): 00:27:14.240 | 1.00th=[ 17], 5.00th=[ 39], 10.00th=[ 47], 20.00th=[ 50], 00:27:14.240 | 30.00th=[ 61], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 82], 00:27:14.240 | 70.00th=[ 85], 80.00th=[ 99], 90.00th=[ 109], 95.00th=[ 123], 00:27:14.240 | 99.00th=[ 146], 99.50th=[ 146], 99.90th=[ 157], 99.95th=[ 157], 00:27:14.240 | 99.99th=[ 157] 00:27:14.240 bw ( KiB/s): min= 592, max= 1362, per=4.48%, avg=831.70, stdev=200.13, samples=20 00:27:14.240 iops : min= 148, max= 340, avg=207.90, stdev=49.96, samples=20 00:27:14.240 lat (msec) : 20=1.10%, 50=19.14%, 100=61.10%, 250=18.66% 00:27:14.240 cpu : usr=32.39%, sys=0.93%, ctx=855, majf=0, minf=9 00:27:14.240 IO depths : 1=0.4%, 2=1.3%, 4=8.1%, 8=76.7%, 16=13.5%, 32=0.0%, >=64=0.0% 00:27:14.240 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:14.240 complete : 0=0.0%, 4=89.6%, 8=6.2%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:14.240 issued rwts: total=2095,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:14.240 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:14.240 filename2: (groupid=0, jobs=1): err= 0: pid=97795: Wed Jul 24 17:27:58 2024 00:27:14.240 read: IOPS=190, BW=762KiB/s (780kB/s)(7660KiB/10051msec) 00:27:14.240 slat (usec): min=7, max=8026, avg=18.62, stdev=258.89 00:27:14.240 clat (msec): min=21, max=189, avg=83.77, stdev=28.51 00:27:14.240 lat (msec): min=21, max=189, avg=83.79, stdev=28.52 00:27:14.240 clat percentiles (msec): 00:27:14.240 | 1.00th=[ 34], 5.00th=[ 47], 10.00th=[ 49], 20.00th=[ 61], 00:27:14.240 | 30.00th=[ 71], 40.00th=[ 72], 50.00th=[ 74], 60.00th=[ 85], 00:27:14.240 | 70.00th=[ 97], 80.00th=[ 108], 90.00th=[ 121], 95.00th=[ 144], 00:27:14.240 | 99.00th=[ 157], 99.50th=[ 157], 99.90th=[ 190], 99.95th=[ 190], 00:27:14.240 | 99.99th=[ 190] 00:27:14.240 bw ( KiB/s): min= 592, max= 1168, per=4.09%, avg=759.60, stdev=158.25, samples=20 00:27:14.240 iops : min= 148, max= 292, avg=189.90, stdev=39.56, samples=20 00:27:14.240 lat (msec) : 50=10.39%, 100=62.56%, 250=27.05% 
00:27:14.240 cpu : usr=32.49%, sys=0.76%, ctx=864, majf=0, minf=9 00:27:14.240 IO depths : 1=1.6%, 2=3.2%, 4=10.0%, 8=72.9%, 16=12.3%, 32=0.0%, >=64=0.0% 00:27:14.240 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:14.240 complete : 0=0.0%, 4=90.2%, 8=5.5%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:14.240 issued rwts: total=1915,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:14.240 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:14.240 filename2: (groupid=0, jobs=1): err= 0: pid=97796: Wed Jul 24 17:27:58 2024 00:27:14.240 read: IOPS=211, BW=845KiB/s (865kB/s)(8484KiB/10038msec) 00:27:14.240 slat (nsec): min=5308, max=32743, avg=10373.29, stdev=3479.75 00:27:14.240 clat (msec): min=32, max=166, avg=75.60, stdev=24.14 00:27:14.240 lat (msec): min=32, max=166, avg=75.61, stdev=24.14 00:27:14.240 clat percentiles (msec): 00:27:14.240 | 1.00th=[ 36], 5.00th=[ 43], 10.00th=[ 47], 20.00th=[ 55], 00:27:14.240 | 30.00th=[ 61], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 80], 00:27:14.240 | 70.00th=[ 86], 80.00th=[ 97], 90.00th=[ 109], 95.00th=[ 120], 00:27:14.240 | 99.00th=[ 142], 99.50th=[ 144], 99.90th=[ 167], 99.95th=[ 167], 00:27:14.240 | 99.99th=[ 167] 00:27:14.240 bw ( KiB/s): min= 592, max= 1248, per=4.55%, avg=843.65, stdev=181.01, samples=20 00:27:14.241 iops : min= 148, max= 312, avg=210.90, stdev=45.24, samples=20 00:27:14.241 lat (msec) : 50=17.21%, 100=66.34%, 250=16.45% 00:27:14.241 cpu : usr=39.92%, sys=1.15%, ctx=1166, majf=0, minf=9 00:27:14.241 IO depths : 1=0.8%, 2=1.7%, 4=8.6%, 8=76.1%, 16=12.8%, 32=0.0%, >=64=0.0% 00:27:14.241 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:14.241 complete : 0=0.0%, 4=89.8%, 8=5.6%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:14.241 issued rwts: total=2121,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:14.241 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:14.241 filename2: (groupid=0, jobs=1): err= 0: pid=97797: Wed Jul 24 17:27:58 2024 00:27:14.241 read: IOPS=209, BW=836KiB/s (856kB/s)(8396KiB/10043msec) 00:27:14.241 slat (usec): min=4, max=4016, avg=15.62, stdev=141.47 00:27:14.241 clat (msec): min=32, max=169, avg=76.34, stdev=21.11 00:27:14.241 lat (msec): min=32, max=169, avg=76.35, stdev=21.11 00:27:14.241 clat percentiles (msec): 00:27:14.241 | 1.00th=[ 35], 5.00th=[ 44], 10.00th=[ 51], 20.00th=[ 61], 00:27:14.241 | 30.00th=[ 67], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 77], 00:27:14.241 | 70.00th=[ 82], 80.00th=[ 93], 90.00th=[ 107], 95.00th=[ 115], 00:27:14.241 | 99.00th=[ 140], 99.50th=[ 146], 99.90th=[ 169], 99.95th=[ 169], 00:27:14.241 | 99.99th=[ 169] 00:27:14.241 bw ( KiB/s): min= 640, max= 1015, per=4.50%, avg=835.15, stdev=114.52, samples=20 00:27:14.241 iops : min= 160, max= 253, avg=208.75, stdev=28.57, samples=20 00:27:14.241 lat (msec) : 50=9.67%, 100=74.85%, 250=15.48% 00:27:14.241 cpu : usr=43.19%, sys=1.38%, ctx=1694, majf=0, minf=9 00:27:14.241 IO depths : 1=1.8%, 2=4.0%, 4=12.5%, 8=70.6%, 16=11.0%, 32=0.0%, >=64=0.0% 00:27:14.241 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:14.241 complete : 0=0.0%, 4=90.6%, 8=4.1%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:14.241 issued rwts: total=2099,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:14.241 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:14.241 filename2: (groupid=0, jobs=1): err= 0: pid=97798: Wed Jul 24 17:27:58 2024 00:27:14.241 read: IOPS=173, BW=692KiB/s (709kB/s)(6936KiB/10022msec) 00:27:14.241 slat (usec): min=4, max=1030, avg=11.13, 
stdev=24.78 00:27:14.241 clat (msec): min=35, max=194, avg=92.39, stdev=26.10 00:27:14.241 lat (msec): min=35, max=194, avg=92.40, stdev=26.10 00:27:14.241 clat percentiles (msec): 00:27:14.241 | 1.00th=[ 39], 5.00th=[ 58], 10.00th=[ 64], 20.00th=[ 68], 00:27:14.241 | 30.00th=[ 71], 40.00th=[ 81], 50.00th=[ 95], 60.00th=[ 104], 00:27:14.241 | 70.00th=[ 108], 80.00th=[ 115], 90.00th=[ 126], 95.00th=[ 140], 00:27:14.241 | 99.00th=[ 148], 99.50th=[ 171], 99.90th=[ 194], 99.95th=[ 194], 00:27:14.241 | 99.99th=[ 194] 00:27:14.241 bw ( KiB/s): min= 512, max= 1024, per=3.70%, avg=687.15, stdev=146.50, samples=20 00:27:14.241 iops : min= 128, max= 256, avg=171.75, stdev=36.63, samples=20 00:27:14.241 lat (msec) : 50=2.83%, 100=52.88%, 250=44.29% 00:27:14.241 cpu : usr=44.68%, sys=1.38%, ctx=1670, majf=0, minf=9 00:27:14.241 IO depths : 1=2.8%, 2=6.5%, 4=17.4%, 8=63.3%, 16=10.0%, 32=0.0%, >=64=0.0% 00:27:14.241 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:14.241 complete : 0=0.0%, 4=91.8%, 8=2.8%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:14.241 issued rwts: total=1734,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:14.241 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:14.241 filename2: (groupid=0, jobs=1): err= 0: pid=97799: Wed Jul 24 17:27:58 2024 00:27:14.241 read: IOPS=175, BW=703KiB/s (720kB/s)(7052KiB/10030msec) 00:27:14.241 slat (usec): min=3, max=8022, avg=24.27, stdev=284.07 00:27:14.241 clat (msec): min=40, max=182, avg=90.83, stdev=24.91 00:27:14.241 lat (msec): min=40, max=182, avg=90.85, stdev=24.92 00:27:14.241 clat percentiles (msec): 00:27:14.241 | 1.00th=[ 44], 5.00th=[ 50], 10.00th=[ 61], 20.00th=[ 70], 00:27:14.241 | 30.00th=[ 73], 40.00th=[ 82], 50.00th=[ 93], 60.00th=[ 102], 00:27:14.241 | 70.00th=[ 107], 80.00th=[ 111], 90.00th=[ 121], 95.00th=[ 133], 00:27:14.241 | 99.00th=[ 150], 99.50th=[ 163], 99.90th=[ 184], 99.95th=[ 184], 00:27:14.241 | 99.99th=[ 184] 00:27:14.241 bw ( KiB/s): min= 512, max= 944, per=3.76%, avg=698.55, stdev=123.66, samples=20 00:27:14.241 iops : min= 128, max= 236, avg=174.60, stdev=30.90, samples=20 00:27:14.241 lat (msec) : 50=5.16%, 100=52.24%, 250=42.60% 00:27:14.241 cpu : usr=41.06%, sys=1.18%, ctx=1227, majf=0, minf=9 00:27:14.241 IO depths : 1=3.0%, 2=6.6%, 4=16.9%, 8=63.8%, 16=9.7%, 32=0.0%, >=64=0.0% 00:27:14.241 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:14.241 complete : 0=0.0%, 4=91.9%, 8=2.5%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:14.241 issued rwts: total=1763,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:14.241 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:14.241 filename2: (groupid=0, jobs=1): err= 0: pid=97800: Wed Jul 24 17:27:58 2024 00:27:14.241 read: IOPS=183, BW=736KiB/s (754kB/s)(7368KiB/10013msec) 00:27:14.241 slat (usec): min=4, max=8020, avg=14.97, stdev=186.79 00:27:14.241 clat (msec): min=30, max=174, avg=86.89, stdev=27.77 00:27:14.241 lat (msec): min=30, max=174, avg=86.90, stdev=27.76 00:27:14.241 clat percentiles (msec): 00:27:14.241 | 1.00th=[ 35], 5.00th=[ 46], 10.00th=[ 50], 20.00th=[ 63], 00:27:14.241 | 30.00th=[ 70], 40.00th=[ 74], 50.00th=[ 85], 60.00th=[ 99], 00:27:14.241 | 70.00th=[ 105], 80.00th=[ 108], 90.00th=[ 120], 95.00th=[ 142], 00:27:14.241 | 99.00th=[ 159], 99.50th=[ 159], 99.90th=[ 176], 99.95th=[ 176], 00:27:14.241 | 99.99th=[ 176] 00:27:14.241 bw ( KiB/s): min= 512, max= 1168, per=3.94%, avg=730.45, stdev=189.56, samples=20 00:27:14.241 iops : min= 128, max= 292, avg=182.60, stdev=47.39, samples=20 
00:27:14.241 lat (msec) : 50=10.37%, 100=53.15%, 250=36.48% 00:27:14.241 cpu : usr=43.52%, sys=1.42%, ctx=1357, majf=0, minf=9 00:27:14.241 IO depths : 1=2.1%, 2=4.3%, 4=13.2%, 8=69.4%, 16=10.9%, 32=0.0%, >=64=0.0% 00:27:14.241 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:14.241 complete : 0=0.0%, 4=90.7%, 8=4.2%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:14.241 issued rwts: total=1842,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:14.241 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:14.241 filename2: (groupid=0, jobs=1): err= 0: pid=97801: Wed Jul 24 17:27:58 2024 00:27:14.241 read: IOPS=214, BW=858KiB/s (879kB/s)(8624KiB/10050msec) 00:27:14.241 slat (usec): min=7, max=8021, avg=15.68, stdev=182.36 00:27:14.241 clat (msec): min=14, max=178, avg=74.28, stdev=22.77 00:27:14.241 lat (msec): min=14, max=178, avg=74.29, stdev=22.77 00:27:14.241 clat percentiles (msec): 00:27:14.241 | 1.00th=[ 35], 5.00th=[ 41], 10.00th=[ 46], 20.00th=[ 55], 00:27:14.241 | 30.00th=[ 65], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 78], 00:27:14.241 | 70.00th=[ 84], 80.00th=[ 92], 90.00th=[ 105], 95.00th=[ 109], 00:27:14.241 | 99.00th=[ 150], 99.50th=[ 150], 99.90th=[ 180], 99.95th=[ 180], 00:27:14.241 | 99.99th=[ 180] 00:27:14.241 bw ( KiB/s): min= 640, max= 1248, per=4.62%, avg=856.00, stdev=177.49, samples=20 00:27:14.241 iops : min= 160, max= 312, avg=214.00, stdev=44.37, samples=20 00:27:14.241 lat (msec) : 20=0.74%, 50=17.86%, 100=67.67%, 250=13.73% 00:27:14.241 cpu : usr=39.21%, sys=1.09%, ctx=1180, majf=0, minf=9 00:27:14.241 IO depths : 1=1.3%, 2=2.9%, 4=11.0%, 8=72.9%, 16=11.8%, 32=0.0%, >=64=0.0% 00:27:14.241 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:14.241 complete : 0=0.0%, 4=90.1%, 8=4.9%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:14.241 issued rwts: total=2156,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:14.241 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:14.241 filename2: (groupid=0, jobs=1): err= 0: pid=97802: Wed Jul 24 17:27:58 2024 00:27:14.241 read: IOPS=168, BW=673KiB/s (689kB/s)(6732KiB/10006msec) 00:27:14.241 slat (usec): min=7, max=8021, avg=14.84, stdev=195.31 00:27:14.241 clat (msec): min=37, max=202, avg=94.95, stdev=27.89 00:27:14.241 lat (msec): min=37, max=202, avg=94.97, stdev=27.89 00:27:14.241 clat percentiles (msec): 00:27:14.241 | 1.00th=[ 47], 5.00th=[ 57], 10.00th=[ 61], 20.00th=[ 71], 00:27:14.241 | 30.00th=[ 72], 40.00th=[ 85], 50.00th=[ 96], 60.00th=[ 107], 00:27:14.241 | 70.00th=[ 108], 80.00th=[ 120], 90.00th=[ 132], 95.00th=[ 144], 00:27:14.241 | 99.00th=[ 169], 99.50th=[ 180], 99.90th=[ 203], 99.95th=[ 203], 00:27:14.241 | 99.99th=[ 203] 00:27:14.241 bw ( KiB/s): min= 384, max= 880, per=3.59%, avg=666.80, stdev=134.42, samples=20 00:27:14.241 iops : min= 96, max= 220, avg=166.70, stdev=33.61, samples=20 00:27:14.241 lat (msec) : 50=3.92%, 100=51.87%, 250=44.21% 00:27:14.241 cpu : usr=32.15%, sys=1.05%, ctx=842, majf=0, minf=9 00:27:14.241 IO depths : 1=3.1%, 2=6.7%, 4=16.3%, 8=64.3%, 16=9.6%, 32=0.0%, >=64=0.0% 00:27:14.241 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:14.241 complete : 0=0.0%, 4=91.9%, 8=2.7%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:14.241 issued rwts: total=1683,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:14.241 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:14.241 00:27:14.241 Run status group 0 (all jobs): 00:27:14.241 READ: bw=18.1MiB/s (19.0MB/s), 664KiB/s-894KiB/s (680kB/s-916kB/s), io=182MiB 
(191MB), run=10006-10066msec 00:27:14.241 17:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:27:14.241 17:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:27:14.241 17:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:14.241 17:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:14.241 17:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:27:14.241 17:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:14.241 17:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.241 17:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:14.241 17:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.241 17:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:14.241 17:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.241 17:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:14.241 17:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.241 17:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:14.241 17:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:27:14.241 17:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:27:14.242 17:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:14.242 17:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.242 17:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:14.242 17:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.242 17:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:27:14.242 17:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.242 17:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:14.242 17:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.242 17:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:14.242 17:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:27:14.242 17:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:27:14.242 17:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:14.242 17:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.242 17:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:14.242 17:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.242 17:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:27:14.242 17:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.242 17:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:14.242 17:27:58 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.242 17:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:27:14.242 17:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:27:14.242 17:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:27:14.242 17:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:27:14.242 17:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:27:14.242 17:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:27:14.242 17:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:27:14.242 17:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:27:14.242 17:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:14.242 17:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:27:14.242 17:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:27:14.242 17:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:14.242 17:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.242 17:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:14.242 bdev_null0 00:27:14.242 17:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.242 17:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:14.242 17:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.242 17:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:14.242 17:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.242 17:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:14.242 17:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.242 17:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:14.242 17:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.242 17:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:14.242 17:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.242 17:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:14.242 [2024-07-24 17:27:58.835920] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:14.242 17:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.242 17:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:14.242 17:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:27:14.242 17:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:27:14.242 17:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:27:14.242 17:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:27:14.242 17:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:14.242 bdev_null1 00:27:14.242 17:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.242 17:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:27:14.242 17:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.242 17:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:14.242 17:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.242 17:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:14.242 17:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.242 17:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:14.242 17:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.242 17:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:14.242 17:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.242 17:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:14.242 17:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.242 17:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:27:14.242 17:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:27:14.242 17:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:27:14.242 17:27:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:27:14.242 17:27:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:27:14.242 17:27:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:14.242 17:27:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:14.242 { 00:27:14.242 "params": { 00:27:14.242 "name": "Nvme$subsystem", 00:27:14.242 "trtype": "$TEST_TRANSPORT", 00:27:14.242 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:14.242 "adrfam": "ipv4", 00:27:14.242 "trsvcid": "$NVMF_PORT", 00:27:14.242 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:14.242 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:14.242 "hdgst": ${hdgst:-false}, 00:27:14.242 "ddgst": ${ddgst:-false} 00:27:14.242 }, 00:27:14.242 "method": "bdev_nvme_attach_controller" 00:27:14.242 } 00:27:14.242 EOF 00:27:14.242 )") 00:27:14.242 17:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:14.242 17:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:27:14.242 17:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:14.242 17:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:27:14.242 17:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:27:14.242 17:27:58 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:27:14.242 17:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:14.242 17:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:27:14.242 17:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:14.242 17:27:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:27:14.242 17:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:27:14.242 17:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:27:14.242 17:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:14.242 17:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:27:14.242 17:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:14.242 17:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:27:14.242 17:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:27:14.242 17:27:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:14.242 17:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:14.242 17:27:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:14.242 { 00:27:14.242 "params": { 00:27:14.242 "name": "Nvme$subsystem", 00:27:14.242 "trtype": "$TEST_TRANSPORT", 00:27:14.242 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:14.242 "adrfam": "ipv4", 00:27:14.242 "trsvcid": "$NVMF_PORT", 00:27:14.242 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:14.242 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:14.242 "hdgst": ${hdgst:-false}, 00:27:14.242 "ddgst": ${ddgst:-false} 00:27:14.242 }, 00:27:14.242 "method": "bdev_nvme_attach_controller" 00:27:14.242 } 00:27:14.242 EOF 00:27:14.242 )") 00:27:14.242 17:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:14.242 17:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:27:14.242 17:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:14.242 17:27:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:27:14.242 17:27:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:27:14.242 17:27:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:27:14.242 17:27:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:14.242 "params": { 00:27:14.242 "name": "Nvme0", 00:27:14.242 "trtype": "tcp", 00:27:14.242 "traddr": "10.0.0.2", 00:27:14.242 "adrfam": "ipv4", 00:27:14.242 "trsvcid": "4420", 00:27:14.243 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:14.243 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:14.243 "hdgst": false, 00:27:14.243 "ddgst": false 00:27:14.243 }, 00:27:14.243 "method": "bdev_nvme_attach_controller" 00:27:14.243 },{ 00:27:14.243 "params": { 00:27:14.243 "name": "Nvme1", 00:27:14.243 "trtype": "tcp", 00:27:14.243 "traddr": "10.0.0.2", 00:27:14.243 "adrfam": "ipv4", 00:27:14.243 "trsvcid": "4420", 00:27:14.243 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:14.243 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:14.243 "hdgst": false, 00:27:14.243 "ddgst": false 00:27:14.243 }, 00:27:14.243 "method": "bdev_nvme_attach_controller" 00:27:14.243 }' 00:27:14.243 17:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:14.243 17:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:14.243 17:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:14.243 17:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:14.243 17:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:27:14.243 17:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:14.243 17:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:14.243 17:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:14.243 17:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:14.243 17:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:14.243 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:27:14.243 ... 00:27:14.243 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:27:14.243 ... 
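For readers reproducing this run outside the harness: the command assembled above amounts to running stock fio with SPDK's bdev ioengine preloaded and the two bdev_nvme_attach_controller entries printed just above supplied as a JSON config. The sketch below is an approximation, not the literal gen_fio_conf/gen_nvmf_target_json output: the JSON wrapper is the usual SPDK "subsystems"/"bdev" layout rather than a copy from this log, and the dif.fio job file and the Nvme0n1/Nvme1n1 filenames are assumptions (they follow SPDK's controller-name plus namespace convention).

# bdev.json: wraps the two attach calls printed above in a standard SPDK bdev config
cat > bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "method": "bdev_nvme_attach_controller",
          "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                      "adrfam": "ipv4", "trsvcid": "4420",
                      "subnqn": "nqn.2016-06.io.spdk:cnode0",
                      "hostnqn": "nqn.2016-06.io.spdk:host0",
                      "hdgst": false, "ddgst": false } },
        { "method": "bdev_nvme_attach_controller",
          "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                      "adrfam": "ipv4", "trsvcid": "4420",
                      "subnqn": "nqn.2016-06.io.spdk:cnode1",
                      "hostnqn": "nqn.2016-06.io.spdk:host1",
                      "hdgst": false, "ddgst": false } }
      ]
    }
  ]
}
EOF

# dif.fio: job file matching the parameters this test sets
# (bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5, randread);
# the filename= values are assumed NVMe bdev names, not taken from the log
cat > dif.fio <<'EOF'
[global]
thread=1
rw=randread
bs=8k,16k,128k
iodepth=8
numjobs=2
time_based=1
runtime=5

[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1
EOF

LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf ./bdev.json ./dif.fio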
00:27:14.243 fio-3.35 00:27:14.243 Starting 4 threads 00:27:19.507 00:27:19.507 filename0: (groupid=0, jobs=1): err= 0: pid=97928: Wed Jul 24 17:28:04 2024 00:27:19.507 read: IOPS=2013, BW=15.7MiB/s (16.5MB/s)(78.7MiB/5003msec) 00:27:19.507 slat (nsec): min=6663, max=60794, avg=10663.75, stdev=5159.20 00:27:19.507 clat (usec): min=2122, max=5038, avg=3931.74, stdev=167.71 00:27:19.507 lat (usec): min=2129, max=5049, avg=3942.40, stdev=167.74 00:27:19.507 clat percentiles (usec): 00:27:19.507 | 1.00th=[ 3621], 5.00th=[ 3720], 10.00th=[ 3752], 20.00th=[ 3818], 00:27:19.507 | 30.00th=[ 3851], 40.00th=[ 3884], 50.00th=[ 3916], 60.00th=[ 3949], 00:27:19.507 | 70.00th=[ 4015], 80.00th=[ 4047], 90.00th=[ 4113], 95.00th=[ 4178], 00:27:19.507 | 99.00th=[ 4359], 99.50th=[ 4490], 99.90th=[ 4752], 99.95th=[ 4883], 00:27:19.507 | 99.99th=[ 5014] 00:27:19.507 bw ( KiB/s): min=15872, max=16208, per=24.94%, avg=16076.44, stdev=105.76, samples=9 00:27:19.507 iops : min= 1984, max= 2026, avg=2009.56, stdev=13.22, samples=9 00:27:19.507 lat (msec) : 4=68.53%, 10=31.47% 00:27:19.507 cpu : usr=93.34%, sys=5.42%, ctx=13, majf=0, minf=0 00:27:19.507 IO depths : 1=4.7%, 2=9.6%, 4=65.4%, 8=20.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:19.507 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:19.507 complete : 0=0.0%, 4=89.6%, 8=10.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:19.507 issued rwts: total=10075,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:19.507 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:19.508 filename0: (groupid=0, jobs=1): err= 0: pid=97929: Wed Jul 24 17:28:04 2024 00:27:19.508 read: IOPS=2011, BW=15.7MiB/s (16.5MB/s)(78.6MiB/5002msec) 00:27:19.508 slat (usec): min=6, max=116, avg=13.98, stdev= 5.40 00:27:19.508 clat (usec): min=2846, max=6677, avg=3909.51, stdev=176.61 00:27:19.508 lat (usec): min=2859, max=6701, avg=3923.48, stdev=176.54 00:27:19.508 clat percentiles (usec): 00:27:19.508 | 1.00th=[ 3589], 5.00th=[ 3687], 10.00th=[ 3720], 20.00th=[ 3785], 00:27:19.508 | 30.00th=[ 3818], 40.00th=[ 3851], 50.00th=[ 3884], 60.00th=[ 3949], 00:27:19.508 | 70.00th=[ 3982], 80.00th=[ 4047], 90.00th=[ 4113], 95.00th=[ 4146], 00:27:19.508 | 99.00th=[ 4293], 99.50th=[ 4424], 99.90th=[ 5014], 99.95th=[ 6652], 00:27:19.508 | 99.99th=[ 6652] 00:27:19.508 bw ( KiB/s): min=15872, max=16256, per=24.92%, avg=16060.33, stdev=124.55, samples=9 00:27:19.508 iops : min= 1984, max= 2032, avg=2007.44, stdev=15.71, samples=9 00:27:19.508 lat (msec) : 4=73.64%, 10=26.36% 00:27:19.508 cpu : usr=94.10%, sys=4.64%, ctx=50, majf=0, minf=9 00:27:19.508 IO depths : 1=12.3%, 2=25.0%, 4=50.0%, 8=12.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:19.508 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:19.508 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:19.508 issued rwts: total=10064,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:19.508 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:19.508 filename1: (groupid=0, jobs=1): err= 0: pid=97930: Wed Jul 24 17:28:04 2024 00:27:19.508 read: IOPS=2020, BW=15.8MiB/s (16.6MB/s)(79.0MiB/5001msec) 00:27:19.508 slat (usec): min=4, max=139, avg= 8.59, stdev= 3.89 00:27:19.508 clat (usec): min=1128, max=4895, avg=3915.23, stdev=240.87 00:27:19.508 lat (usec): min=1135, max=4902, avg=3923.82, stdev=240.96 00:27:19.508 clat percentiles (usec): 00:27:19.508 | 1.00th=[ 3458], 5.00th=[ 3687], 10.00th=[ 3752], 20.00th=[ 3818], 00:27:19.508 | 30.00th=[ 3851], 40.00th=[ 3884], 50.00th=[ 3916], 
60.00th=[ 3949], 00:27:19.508 | 70.00th=[ 4015], 80.00th=[ 4047], 90.00th=[ 4113], 95.00th=[ 4178], 00:27:19.508 | 99.00th=[ 4359], 99.50th=[ 4424], 99.90th=[ 4621], 99.95th=[ 4621], 00:27:19.508 | 99.99th=[ 4752] 00:27:19.508 bw ( KiB/s): min=15872, max=16544, per=25.05%, avg=16145.78, stdev=211.88, samples=9 00:27:19.508 iops : min= 1984, max= 2068, avg=2018.22, stdev=26.48, samples=9 00:27:19.508 lat (msec) : 2=0.48%, 4=69.31%, 10=30.21% 00:27:19.508 cpu : usr=93.56%, sys=4.94%, ctx=73, majf=0, minf=0 00:27:19.508 IO depths : 1=10.5%, 2=24.1%, 4=50.8%, 8=14.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:19.508 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:19.508 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:19.508 issued rwts: total=10106,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:19.508 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:19.508 filename1: (groupid=0, jobs=1): err= 0: pid=97931: Wed Jul 24 17:28:04 2024 00:27:19.508 read: IOPS=2012, BW=15.7MiB/s (16.5MB/s)(78.6MiB/5001msec) 00:27:19.508 slat (nsec): min=6700, max=81189, avg=13903.33, stdev=5448.68 00:27:19.508 clat (usec): min=2613, max=6358, avg=3903.24, stdev=168.51 00:27:19.508 lat (usec): min=2625, max=6372, avg=3917.15, stdev=169.10 00:27:19.508 clat percentiles (usec): 00:27:19.508 | 1.00th=[ 3621], 5.00th=[ 3687], 10.00th=[ 3720], 20.00th=[ 3785], 00:27:19.508 | 30.00th=[ 3818], 40.00th=[ 3851], 50.00th=[ 3884], 60.00th=[ 3916], 00:27:19.508 | 70.00th=[ 3982], 80.00th=[ 4015], 90.00th=[ 4080], 95.00th=[ 4146], 00:27:19.508 | 99.00th=[ 4293], 99.50th=[ 4359], 99.90th=[ 5866], 99.95th=[ 5932], 00:27:19.508 | 99.99th=[ 5932] 00:27:19.508 bw ( KiB/s): min=15872, max=16256, per=24.92%, avg=16060.33, stdev=124.55, samples=9 00:27:19.508 iops : min= 1984, max= 2032, avg=2007.44, stdev=15.71, samples=9 00:27:19.508 lat (msec) : 4=74.97%, 10=25.03% 00:27:19.508 cpu : usr=94.26%, sys=4.56%, ctx=46, majf=0, minf=9 00:27:19.508 IO depths : 1=12.4%, 2=25.0%, 4=50.0%, 8=12.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:19.508 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:19.508 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:19.508 issued rwts: total=10064,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:19.508 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:19.508 00:27:19.508 Run status group 0 (all jobs): 00:27:19.508 READ: bw=62.9MiB/s (66.0MB/s), 15.7MiB/s-15.8MiB/s (16.5MB/s-16.6MB/s), io=315MiB (330MB), run=5001-5003msec 00:27:19.508 17:28:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:27:19.508 17:28:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:27:19.508 17:28:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:19.508 17:28:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:19.508 17:28:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:27:19.508 17:28:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:19.508 17:28:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.508 17:28:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:19.508 17:28:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.508 17:28:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd 
bdev_null_delete bdev_null0 00:27:19.508 17:28:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.508 17:28:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:19.508 17:28:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.508 17:28:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:19.508 17:28:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:27:19.508 17:28:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:27:19.508 17:28:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:19.508 17:28:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.508 17:28:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:19.508 17:28:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.508 17:28:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:27:19.508 17:28:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.508 17:28:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:19.508 17:28:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.508 00:27:19.508 real 0m23.673s 00:27:19.508 user 2m6.980s 00:27:19.508 sys 0m5.275s 00:27:19.508 17:28:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:19.508 17:28:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:19.508 ************************************ 00:27:19.508 END TEST fio_dif_rand_params 00:27:19.508 ************************************ 00:27:19.508 17:28:05 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:27:19.508 17:28:05 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:19.508 17:28:05 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:19.508 17:28:05 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:19.508 ************************************ 00:27:19.508 START TEST fio_dif_digest 00:27:19.508 ************************************ 00:27:19.508 17:28:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:27:19.508 17:28:05 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:27:19.508 17:28:05 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:27:19.508 17:28:05 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:27:19.508 17:28:05 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:27:19.508 17:28:05 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:27:19.508 17:28:05 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:27:19.508 17:28:05 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:27:19.508 17:28:05 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:27:19.508 17:28:05 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:27:19.508 17:28:05 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:27:19.508 17:28:05 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:27:19.508 17:28:05 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:27:19.508 17:28:05 
nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:27:19.508 17:28:05 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:27:19.508 17:28:05 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:27:19.508 17:28:05 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:27:19.508 17:28:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.508 17:28:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:19.508 bdev_null0 00:27:19.508 17:28:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.508 17:28:05 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:19.508 17:28:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.508 17:28:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:19.508 17:28:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.508 17:28:05 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:19.508 17:28:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.508 17:28:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:19.508 17:28:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.508 17:28:05 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:19.508 17:28:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.508 17:28:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:19.508 [2024-07-24 17:28:05.074068] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:19.508 17:28:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.508 17:28:05 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:27:19.508 17:28:05 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:27:19.509 17:28:05 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:19.509 17:28:05 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:27:19.509 17:28:05 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:27:19.509 17:28:05 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:19.509 17:28:05 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:19.509 17:28:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:19.509 17:28:05 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:19.509 { 00:27:19.509 "params": { 00:27:19.509 "name": "Nvme$subsystem", 00:27:19.509 "trtype": "$TEST_TRANSPORT", 00:27:19.509 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:19.509 "adrfam": "ipv4", 00:27:19.509 "trsvcid": "$NVMF_PORT", 00:27:19.509 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:19.509 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:19.509 "hdgst": ${hdgst:-false}, 
00:27:19.509 "ddgst": ${ddgst:-false} 00:27:19.509 }, 00:27:19.509 "method": "bdev_nvme_attach_controller" 00:27:19.509 } 00:27:19.509 EOF 00:27:19.509 )") 00:27:19.509 17:28:05 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:27:19.509 17:28:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:27:19.509 17:28:05 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:27:19.509 17:28:05 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:27:19.509 17:28:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:19.509 17:28:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:27:19.509 17:28:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:19.509 17:28:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:27:19.509 17:28:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:27:19.509 17:28:05 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:27:19.509 17:28:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:19.509 17:28:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:19.509 17:28:05 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:27:19.509 17:28:05 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:27:19.509 17:28:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:19.509 17:28:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:27:19.509 17:28:05 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:27:19.509 17:28:05 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:27:19.509 17:28:05 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:19.509 "params": { 00:27:19.509 "name": "Nvme0", 00:27:19.509 "trtype": "tcp", 00:27:19.509 "traddr": "10.0.0.2", 00:27:19.509 "adrfam": "ipv4", 00:27:19.509 "trsvcid": "4420", 00:27:19.509 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:19.509 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:19.509 "hdgst": true, 00:27:19.509 "ddgst": true 00:27:19.509 }, 00:27:19.509 "method": "bdev_nvme_attach_controller" 00:27:19.509 }' 00:27:19.509 17:28:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:19.509 17:28:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:19.509 17:28:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:27:19.509 17:28:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:27:19.509 17:28:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:27:19.509 17:28:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:19.509 17:28:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:27:19.509 17:28:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:27:19.509 17:28:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:19.509 17:28:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:19.509 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:27:19.509 ... 
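The target that the three digest threads started below read from was stood up a few screens above through rpc_cmd; for clarity, the same setup expressed as direct scripts/rpc.py calls against the running nvmf_tgt (with the TCP transport already created earlier in the run) would look roughly like the sketch below. This is an illustration of the logged arguments, paths relative to the SPDK repo root, not an excerpt from target/dif.sh. The digests themselves are enabled on the initiator side by the "hdgst": true / "ddgst": true parameters in the bdev_nvme_attach_controller config printed just above.

# DIF type 3 null bdev: 64 MB, 512-byte blocks, 16-byte metadata
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
# NVMe-oF subsystem, namespace, and TCP listener matching the logged rpc_cmd calls
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420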
00:27:19.509 fio-3.35 00:27:19.509 Starting 3 threads 00:27:31.712 00:27:31.712 filename0: (groupid=0, jobs=1): err= 0: pid=98037: Wed Jul 24 17:28:15 2024 00:27:31.712 read: IOPS=183, BW=22.9MiB/s (24.0MB/s)(229MiB/10004msec) 00:27:31.712 slat (nsec): min=6694, max=56493, avg=11517.74, stdev=4149.19 00:27:31.712 clat (usec): min=4499, max=20245, avg=16371.70, stdev=1051.34 00:27:31.712 lat (usec): min=4510, max=20256, avg=16383.22, stdev=1051.46 00:27:31.712 clat percentiles (usec): 00:27:31.712 | 1.00th=[13960], 5.00th=[14746], 10.00th=[15139], 20.00th=[15533], 00:27:31.712 | 30.00th=[15795], 40.00th=[16057], 50.00th=[16319], 60.00th=[16581], 00:27:31.712 | 70.00th=[16909], 80.00th=[17171], 90.00th=[17695], 95.00th=[18220], 00:27:31.712 | 99.00th=[18744], 99.50th=[19006], 99.90th=[19530], 99.95th=[20317], 00:27:31.712 | 99.99th=[20317] 00:27:31.712 bw ( KiB/s): min=22272, max=24320, per=27.28%, avg=23374.32, stdev=503.06, samples=19 00:27:31.712 iops : min= 174, max= 190, avg=182.58, stdev= 3.91, samples=19 00:27:31.712 lat (msec) : 10=0.05%, 20=99.89%, 50=0.05% 00:27:31.712 cpu : usr=92.94%, sys=5.69%, ctx=23, majf=0, minf=9 00:27:31.712 IO depths : 1=10.6%, 2=89.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:31.712 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:31.712 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:31.712 issued rwts: total=1831,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:31.712 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:31.712 filename0: (groupid=0, jobs=1): err= 0: pid=98038: Wed Jul 24 17:28:15 2024 00:27:31.712 read: IOPS=260, BW=32.5MiB/s (34.1MB/s)(325MiB/10005msec) 00:27:31.712 slat (nsec): min=6797, max=59966, avg=13206.73, stdev=4703.17 00:27:31.712 clat (usec): min=4824, max=15618, avg=11518.56, stdev=846.60 00:27:31.712 lat (usec): min=4835, max=15639, avg=11531.76, stdev=847.04 00:27:31.712 clat percentiles (usec): 00:27:31.712 | 1.00th=[ 9503], 5.00th=[10159], 10.00th=[10421], 20.00th=[10814], 00:27:31.712 | 30.00th=[11076], 40.00th=[11338], 50.00th=[11469], 60.00th=[11731], 00:27:31.712 | 70.00th=[11863], 80.00th=[12256], 90.00th=[12649], 95.00th=[12911], 00:27:31.712 | 99.00th=[13566], 99.50th=[13829], 99.90th=[14746], 99.95th=[15664], 00:27:31.712 | 99.99th=[15664] 00:27:31.712 bw ( KiB/s): min=32000, max=34560, per=38.81%, avg=33253.05, stdev=713.41, samples=19 00:27:31.712 iops : min= 250, max= 270, avg=259.79, stdev= 5.57, samples=19 00:27:31.712 lat (msec) : 10=3.80%, 20=96.20% 00:27:31.712 cpu : usr=91.33%, sys=6.71%, ctx=50, majf=0, minf=9 00:27:31.712 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:31.712 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:31.712 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:31.712 issued rwts: total=2602,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:31.712 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:31.712 filename0: (groupid=0, jobs=1): err= 0: pid=98039: Wed Jul 24 17:28:15 2024 00:27:31.712 read: IOPS=226, BW=28.3MiB/s (29.7MB/s)(283MiB/10004msec) 00:27:31.712 slat (nsec): min=6669, max=59943, avg=11654.49, stdev=4597.91 00:27:31.712 clat (usec): min=9568, max=17742, avg=13231.29, stdev=990.40 00:27:31.712 lat (usec): min=9579, max=17767, avg=13242.94, stdev=990.36 00:27:31.712 clat percentiles (usec): 00:27:31.712 | 1.00th=[10814], 5.00th=[11600], 10.00th=[11994], 20.00th=[12518], 00:27:31.712 | 30.00th=[12780], 
40.00th=[13042], 50.00th=[13173], 60.00th=[13435], 00:27:31.712 | 70.00th=[13698], 80.00th=[13960], 90.00th=[14484], 95.00th=[14877], 00:27:31.712 | 99.00th=[15664], 99.50th=[15795], 99.90th=[17171], 99.95th=[17433], 00:27:31.712 | 99.99th=[17695] 00:27:31.712 bw ( KiB/s): min=27904, max=29952, per=33.82%, avg=28984.84, stdev=566.80, samples=19 00:27:31.712 iops : min= 218, max= 234, avg=226.42, stdev= 4.45, samples=19 00:27:31.712 lat (msec) : 10=0.09%, 20=99.91% 00:27:31.712 cpu : usr=92.69%, sys=5.77%, ctx=15, majf=0, minf=0 00:27:31.712 IO depths : 1=6.2%, 2=93.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:31.712 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:31.712 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:31.712 issued rwts: total=2265,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:31.712 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:31.712 00:27:31.712 Run status group 0 (all jobs): 00:27:31.712 READ: bw=83.7MiB/s (87.7MB/s), 22.9MiB/s-32.5MiB/s (24.0MB/s-34.1MB/s), io=837MiB (878MB), run=10004-10005msec 00:27:31.712 17:28:15 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:27:31.712 17:28:15 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:27:31.712 17:28:15 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:27:31.712 17:28:15 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:31.712 17:28:15 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:27:31.712 17:28:15 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:31.712 17:28:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.712 17:28:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:31.712 17:28:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.712 17:28:15 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:31.712 17:28:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.712 17:28:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:31.712 17:28:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.712 00:27:31.712 real 0m10.957s 00:27:31.712 user 0m28.352s 00:27:31.712 sys 0m2.066s 00:27:31.712 17:28:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:31.712 17:28:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:31.712 ************************************ 00:27:31.712 END TEST fio_dif_digest 00:27:31.712 ************************************ 00:27:31.712 17:28:16 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:27:31.712 17:28:16 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:27:31.712 17:28:16 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:31.712 17:28:16 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:27:31.712 17:28:16 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:31.712 17:28:16 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:27:31.712 17:28:16 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:31.712 17:28:16 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:31.712 rmmod nvme_tcp 00:27:31.712 rmmod nvme_fabrics 00:27:31.712 rmmod nvme_keyring 00:27:31.712 17:28:16 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 
00:27:31.712 17:28:16 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:27:31.712 17:28:16 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:27:31.712 17:28:16 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 97281 ']' 00:27:31.712 17:28:16 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 97281 00:27:31.712 17:28:16 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 97281 ']' 00:27:31.712 17:28:16 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 97281 00:27:31.712 17:28:16 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:27:31.712 17:28:16 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:31.712 17:28:16 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 97281 00:27:31.712 17:28:16 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:31.712 17:28:16 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:31.712 killing process with pid 97281 00:27:31.712 17:28:16 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 97281' 00:27:31.712 17:28:16 nvmf_dif -- common/autotest_common.sh@969 -- # kill 97281 00:27:31.712 17:28:16 nvmf_dif -- common/autotest_common.sh@974 -- # wait 97281 00:27:31.712 17:28:16 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:27:31.712 17:28:16 nvmf_dif -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:27:31.713 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:31.713 Waiting for block devices as requested 00:27:31.713 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:27:31.713 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:27:31.713 17:28:16 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:31.713 17:28:16 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:31.713 17:28:16 nvmf_dif -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:31.713 17:28:16 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:31.713 17:28:16 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:31.713 17:28:16 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:31.713 17:28:16 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:31.713 17:28:16 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:27:31.713 00:27:31.713 real 0m59.939s 00:27:31.713 user 3m52.454s 00:27:31.713 sys 0m14.750s 00:27:31.713 17:28:16 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:31.713 17:28:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:31.713 ************************************ 00:27:31.713 END TEST nvmf_dif 00:27:31.713 ************************************ 00:27:31.713 17:28:16 -- spdk/autotest.sh@297 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:27:31.713 17:28:16 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:31.713 17:28:16 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:31.713 17:28:16 -- common/autotest_common.sh@10 -- # set +x 00:27:31.713 ************************************ 00:27:31.713 START TEST nvmf_abort_qd_sizes 00:27:31.713 ************************************ 00:27:31.713 17:28:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:27:31.713 * Looking for test storage... 
00:27:31.713 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:27:31.713 17:28:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:31.713 17:28:17 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:27:31.713 17:28:17 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:31.713 17:28:17 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:31.713 17:28:17 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:31.713 17:28:17 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:31.713 17:28:17 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:31.713 17:28:17 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:31.713 17:28:17 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:31.713 17:28:17 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:31.713 17:28:17 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:31.713 17:28:17 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:31.713 17:28:17 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:27:31.713 17:28:17 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:27:31.713 17:28:17 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:31.713 17:28:17 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:31.713 17:28:17 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:31.713 17:28:17 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:31.713 17:28:17 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:31.713 17:28:17 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:31.713 17:28:17 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:31.713 17:28:17 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:31.713 17:28:17 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.713 17:28:17 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.713 17:28:17 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.713 17:28:17 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:27:31.713 17:28:17 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:31.713 17:28:17 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:27:31.713 17:28:17 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:31.713 17:28:17 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:31.713 17:28:17 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:31.713 17:28:17 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:31.713 17:28:17 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:31.713 17:28:17 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:31.713 17:28:17 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:31.713 17:28:17 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:31.713 17:28:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:27:31.713 17:28:17 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:31.713 17:28:17 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:31.713 17:28:17 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:31.713 17:28:17 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:31.713 17:28:17 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:31.713 17:28:17 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:31.713 17:28:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:31.713 17:28:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:31.713 17:28:17 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:27:31.713 17:28:17 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:27:31.713 17:28:17 nvmf_abort_qd_sizes -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:27:31.713 17:28:17 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:27:31.713 17:28:17 nvmf_abort_qd_sizes -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:27:31.713 17:28:17 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # nvmf_veth_init 00:27:31.713 17:28:17 nvmf_abort_qd_sizes -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:31.713 17:28:17 nvmf_abort_qd_sizes -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:31.713 17:28:17 nvmf_abort_qd_sizes -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:27:31.713 17:28:17 nvmf_abort_qd_sizes -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:27:31.713 17:28:17 
nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:31.713 17:28:17 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:31.713 17:28:17 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:31.713 17:28:17 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:31.713 17:28:17 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:31.713 17:28:17 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:31.713 17:28:17 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:31.713 17:28:17 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:31.713 17:28:17 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:27:31.713 17:28:17 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:27:31.713 Cannot find device "nvmf_tgt_br" 00:27:31.713 17:28:17 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # true 00:27:31.713 17:28:17 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:27:31.713 Cannot find device "nvmf_tgt_br2" 00:27:31.713 17:28:17 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # true 00:27:31.713 17:28:17 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:27:31.713 17:28:17 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:27:31.713 Cannot find device "nvmf_tgt_br" 00:27:31.713 17:28:17 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # true 00:27:31.713 17:28:17 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:27:31.713 Cannot find device "nvmf_tgt_br2" 00:27:31.713 17:28:17 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # true 00:27:31.713 17:28:17 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:27:31.713 17:28:17 nvmf_abort_qd_sizes -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:27:31.713 17:28:17 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:31.713 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:31.713 17:28:17 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:27:31.713 17:28:17 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:31.713 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:31.713 17:28:17 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:27:31.713 17:28:17 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:27:31.713 17:28:17 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:31.714 17:28:17 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:31.714 17:28:17 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:31.714 17:28:17 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:31.714 17:28:17 nvmf_abort_qd_sizes -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:31.714 17:28:17 nvmf_abort_qd_sizes -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:31.714 17:28:17 
nvmf_abort_qd_sizes -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:27:31.714 17:28:17 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:27:31.714 17:28:17 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:27:31.714 17:28:17 nvmf_abort_qd_sizes -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:27:31.714 17:28:17 nvmf_abort_qd_sizes -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:27:31.714 17:28:17 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:27:31.714 17:28:17 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:31.714 17:28:17 nvmf_abort_qd_sizes -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:31.714 17:28:17 nvmf_abort_qd_sizes -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:31.714 17:28:17 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:27:31.714 17:28:17 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:27:31.714 17:28:17 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:27:31.714 17:28:17 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:31.714 17:28:17 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:31.714 17:28:17 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:31.714 17:28:17 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:31.714 17:28:17 nvmf_abort_qd_sizes -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:27:31.714 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:31.714 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:27:31.714 00:27:31.714 --- 10.0.0.2 ping statistics --- 00:27:31.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:31.714 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:27:31.714 17:28:17 nvmf_abort_qd_sizes -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:27:31.714 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:31.714 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:27:31.714 00:27:31.714 --- 10.0.0.3 ping statistics --- 00:27:31.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:31.714 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:27:31.714 17:28:17 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:31.714 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:31.714 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:27:31.714 00:27:31.714 --- 10.0.0.1 ping statistics --- 00:27:31.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:31.714 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:27:31.714 17:28:17 nvmf_abort_qd_sizes -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:31.714 17:28:17 nvmf_abort_qd_sizes -- nvmf/common.sh@433 -- # return 0 00:27:31.714 17:28:17 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:27:31.714 17:28:17 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:27:31.972 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:31.972 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:27:31.972 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:27:32.230 17:28:18 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:32.230 17:28:18 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:32.230 17:28:18 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:32.230 17:28:18 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:32.230 17:28:18 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:32.230 17:28:18 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:32.230 17:28:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:27:32.230 17:28:18 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:32.230 17:28:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:32.230 17:28:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:32.230 17:28:18 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=98631 00:27:32.230 17:28:18 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:27:32.230 17:28:18 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 98631 00:27:32.230 17:28:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 98631 ']' 00:27:32.230 17:28:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:32.230 17:28:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:32.230 17:28:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:32.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:32.230 17:28:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:32.230 17:28:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:32.230 [2024-07-24 17:28:18.303843] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
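For readers reconstructing the nvmf_veth_init sequence traced above, here is a condensed, standalone shell sketch of the same namespace/veth/bridge topology, using the interface names and addresses from the trace; the second target interface (nvmf_tgt_if2, 10.0.0.3) is left out for brevity, and the snippet assumes a disposable root-capable host rather than the CI runner itself.

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator-side veth pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br          # target-side veth pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                   # move target end into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge                                  # bridge joins both host-side peers
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT    # let NVMe/TCP traffic in
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                                               # same reachability check as in the trace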
00:27:32.230 [2024-07-24 17:28:18.303943] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:32.230 [2024-07-24 17:28:18.440629] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:32.489 [2024-07-24 17:28:18.542600] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:32.489 [2024-07-24 17:28:18.542659] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:32.489 [2024-07-24 17:28:18.542673] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:32.489 [2024-07-24 17:28:18.542684] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:32.489 [2024-07-24 17:28:18.542693] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:32.489 [2024-07-24 17:28:18.542816] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:32.489 [2024-07-24 17:28:18.542950] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:32.489 [2024-07-24 17:28:18.543562] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:32.489 [2024-07-24 17:28:18.543569] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:33.424 17:28:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:33.424 17:28:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:27:33.424 17:28:19 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:33.424 17:28:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:33.424 17:28:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:33.424 17:28:19 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:33.424 17:28:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:27:33.424 17:28:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:27:33.424 17:28:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:27:33.424 17:28:19 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:27:33.424 17:28:19 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:27:33.424 17:28:19 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n '' ]] 00:27:33.424 17:28:19 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:27:33.424 17:28:19 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:27:33.424 17:28:19 nvmf_abort_qd_sizes -- scripts/common.sh@295 -- # local bdf= 00:27:33.424 17:28:19 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:27:33.424 17:28:19 nvmf_abort_qd_sizes -- scripts/common.sh@230 -- # local class 00:27:33.424 17:28:19 nvmf_abort_qd_sizes -- scripts/common.sh@231 -- # local subclass 00:27:33.424 17:28:19 nvmf_abort_qd_sizes -- scripts/common.sh@232 -- # local progif 00:27:33.424 17:28:19 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # printf %02x 1 00:27:33.424 17:28:19 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # 
class=01 00:27:33.424 17:28:19 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # printf %02x 8 00:27:33.424 17:28:19 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # subclass=08 00:27:33.424 17:28:19 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # printf %02x 2 00:27:33.424 17:28:19 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # progif=02 00:27:33.424 17:28:19 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # hash lspci 00:27:33.424 17:28:19 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:27:33.424 17:28:19 nvmf_abort_qd_sizes -- scripts/common.sh@239 -- # lspci -mm -n -D 00:27:33.424 17:28:19 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # grep -i -- -p02 00:27:33.424 17:28:19 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:27:33.424 17:28:19 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # tr -d '"' 00:27:33.424 17:28:19 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:27:33.424 17:28:19 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:27:33.424 17:28:19 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:27:33.424 17:28:19 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:27:33.424 17:28:19 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:27:33.424 17:28:19 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:27:33.424 17:28:19 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:27:33.424 17:28:19 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:27:33.424 17:28:19 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:27:33.424 17:28:19 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:27:33.424 17:28:19 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:27:33.424 17:28:19 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:27:33.424 17:28:19 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:27:33.424 17:28:19 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:27:33.424 17:28:19 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:27:33.424 17:28:19 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:27:33.424 17:28:19 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:27:33.424 17:28:19 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:27:33.424 17:28:19 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:27:33.424 17:28:19 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:27:33.424 17:28:19 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:27:33.424 17:28:19 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:27:33.424 17:28:19 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:27:33.424 17:28:19 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:27:33.424 17:28:19 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 2 )) 00:27:33.424 17:28:19 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:27:33.424 17:28:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 00:27:33.424 17:28:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:27:33.424 17:28:19 
nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:27:33.424 17:28:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:33.424 17:28:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:33.424 17:28:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:33.424 ************************************ 00:27:33.424 START TEST spdk_target_abort 00:27:33.424 ************************************ 00:27:33.424 17:28:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:27:33.424 17:28:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:27:33.424 17:28:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:27:33.424 17:28:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.424 17:28:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:33.424 spdk_targetn1 00:27:33.424 17:28:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.424 17:28:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:33.425 17:28:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.425 17:28:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:33.425 [2024-07-24 17:28:19.495535] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:33.425 17:28:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.425 17:28:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:27:33.425 17:28:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.425 17:28:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:33.425 17:28:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.425 17:28:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:27:33.425 17:28:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.425 17:28:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:33.425 17:28:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.425 17:28:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:27:33.425 17:28:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.425 17:28:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:33.425 [2024-07-24 17:28:19.523725] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:33.425 17:28:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.425 17:28:19 
nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:27:33.425 17:28:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:27:33.425 17:28:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:27:33.425 17:28:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:27:33.425 17:28:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:27:33.425 17:28:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:27:33.425 17:28:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:27:33.425 17:28:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:27:33.425 17:28:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:27:33.425 17:28:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:33.425 17:28:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:27:33.425 17:28:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:33.425 17:28:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:27:33.425 17:28:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:33.425 17:28:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:27:33.425 17:28:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:33.425 17:28:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:33.425 17:28:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:33.425 17:28:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:33.425 17:28:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:33.425 17:28:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:36.738 Initializing NVMe Controllers 00:27:36.738 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:27:36.738 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:36.738 Initialization complete. Launching workers. 
00:27:36.738 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10726, failed: 0 00:27:36.738 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1103, failed to submit 9623 00:27:36.738 success 754, unsuccess 349, failed 0 00:27:36.738 17:28:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:36.738 17:28:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:40.020 Initializing NVMe Controllers 00:27:40.020 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:27:40.020 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:40.020 Initialization complete. Launching workers. 00:27:40.020 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 6017, failed: 0 00:27:40.020 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1265, failed to submit 4752 00:27:40.020 success 271, unsuccess 994, failed 0 00:27:40.020 17:28:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:40.020 17:28:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:43.306 Initializing NVMe Controllers 00:27:43.306 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:27:43.306 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:43.306 Initialization complete. Launching workers. 
00:27:43.306 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 30832, failed: 0 00:27:43.306 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2599, failed to submit 28233 00:27:43.306 success 477, unsuccess 2122, failed 0 00:27:43.306 17:28:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:27:43.306 17:28:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.306 17:28:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:43.306 17:28:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.306 17:28:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:27:43.306 17:28:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.306 17:28:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:43.873 17:28:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.873 17:28:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 98631 00:27:43.873 17:28:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 98631 ']' 00:27:43.873 17:28:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 98631 00:27:43.873 17:28:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:27:43.873 17:28:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:43.873 17:28:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 98631 00:27:43.873 killing process with pid 98631 00:27:43.873 17:28:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:43.873 17:28:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:43.873 17:28:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 98631' 00:27:43.873 17:28:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 98631 00:27:43.873 17:28:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 98631 00:27:44.132 ************************************ 00:27:44.132 END TEST spdk_target_abort 00:27:44.132 ************************************ 00:27:44.132 00:27:44.132 real 0m10.883s 00:27:44.132 user 0m44.714s 00:27:44.132 sys 0m1.726s 00:27:44.132 17:28:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:44.132 17:28:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:44.132 17:28:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:27:44.132 17:28:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:44.132 17:28:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:44.132 17:28:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:44.132 ************************************ 00:27:44.132 START TEST kernel_target_abort 00:27:44.132 
************************************ 00:27:44.132 17:28:30 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:27:44.132 17:28:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:27:44.132 17:28:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:27:44.132 17:28:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:44.132 17:28:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:44.132 17:28:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:44.132 17:28:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:44.132 17:28:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:44.132 17:28:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:44.132 17:28:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:44.132 17:28:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:44.132 17:28:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:44.132 17:28:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:27:44.132 17:28:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:27:44.132 17:28:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:27:44.132 17:28:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:44.132 17:28:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:44.132 17:28:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:44.132 17:28:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:27:44.132 17:28:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:27:44.132 17:28:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:27:44.391 17:28:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:44.391 17:28:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:27:44.650 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:44.650 Waiting for block devices as requested 00:27:44.650 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:27:44.650 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:27:44.909 17:28:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:27:44.909 17:28:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:44.909 17:28:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:27:44.909 17:28:30 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:27:44.909 17:28:30 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:44.909 17:28:30 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:27:44.909 17:28:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:27:44.909 17:28:30 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:27:44.909 17:28:30 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:27:44.909 No valid GPT data, bailing 00:27:44.909 17:28:30 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:44.909 17:28:30 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:27:44.909 17:28:30 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:27:44.909 17:28:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:27:44.909 17:28:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:27:44.909 17:28:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:27:44.909 17:28:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:27:44.909 17:28:30 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n2 00:27:44.909 17:28:30 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:27:44.909 17:28:30 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:27:44.909 17:28:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:27:44.909 17:28:30 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:27:44.909 17:28:30 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:27:44.909 No valid GPT data, bailing 00:27:44.909 17:28:31 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:27:44.909 17:28:31 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:27:44.909 17:28:31 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:27:44.909 17:28:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:27:44.909 17:28:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:27:44.909 17:28:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:27:44.909 17:28:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:27:44.909 17:28:31 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n3 00:27:44.909 17:28:31 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:27:44.909 17:28:31 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:27:44.909 17:28:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:27:44.909 17:28:31 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:27:44.909 17:28:31 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:27:44.909 No valid GPT data, bailing 00:27:44.909 17:28:31 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:27:44.909 17:28:31 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:27:44.909 17:28:31 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:27:44.909 17:28:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:27:44.909 17:28:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:27:44.909 17:28:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:27:44.909 17:28:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:27:44.909 17:28:31 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme1n1 00:27:44.909 17:28:31 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:27:44.909 17:28:31 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:27:44.909 17:28:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:27:44.909 17:28:31 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:27:44.909 17:28:31 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:27:45.168 No valid GPT data, bailing 00:27:45.168 17:28:31 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:27:45.168 17:28:31 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:27:45.168 17:28:31 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:27:45.168 17:28:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:27:45.168 17:28:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ 
-b /dev/nvme1n1 ]] 00:27:45.168 17:28:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:45.168 17:28:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:45.168 17:28:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:45.168 17:28:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:27:45.168 17:28:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:27:45.168 17:28:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:27:45.168 17:28:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:27:45.168 17:28:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:27:45.168 17:28:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:27:45.168 17:28:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:27:45.168 17:28:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:27:45.168 17:28:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:45.168 17:28:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b --hostid=6e739a91-f4c2-49d2-9c71-a65eb7d5315b -a 10.0.0.1 -t tcp -s 4420 00:27:45.168 00:27:45.168 Discovery Log Number of Records 2, Generation counter 2 00:27:45.168 =====Discovery Log Entry 0====== 00:27:45.168 trtype: tcp 00:27:45.168 adrfam: ipv4 00:27:45.168 subtype: current discovery subsystem 00:27:45.168 treq: not specified, sq flow control disable supported 00:27:45.168 portid: 1 00:27:45.168 trsvcid: 4420 00:27:45.168 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:45.168 traddr: 10.0.0.1 00:27:45.168 eflags: none 00:27:45.168 sectype: none 00:27:45.168 =====Discovery Log Entry 1====== 00:27:45.168 trtype: tcp 00:27:45.168 adrfam: ipv4 00:27:45.168 subtype: nvme subsystem 00:27:45.168 treq: not specified, sq flow control disable supported 00:27:45.168 portid: 1 00:27:45.168 trsvcid: 4420 00:27:45.168 subnqn: nqn.2016-06.io.spdk:testnqn 00:27:45.168 traddr: 10.0.0.1 00:27:45.168 eflags: none 00:27:45.168 sectype: none 00:27:45.168 17:28:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:27:45.168 17:28:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:27:45.168 17:28:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:27:45.168 17:28:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:27:45.168 17:28:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:27:45.168 17:28:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:27:45.168 17:28:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:27:45.168 17:28:31 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:27:45.168 17:28:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:27:45.168 17:28:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:45.168 17:28:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:27:45.168 17:28:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:45.168 17:28:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:27:45.168 17:28:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:45.168 17:28:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:27:45.168 17:28:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:45.168 17:28:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:27:45.168 17:28:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:45.168 17:28:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:45.168 17:28:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:45.168 17:28:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:48.453 Initializing NVMe Controllers 00:27:48.453 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:48.453 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:48.453 Initialization complete. Launching workers. 00:27:48.453 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31724, failed: 0 00:27:48.453 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 31724, failed to submit 0 00:27:48.453 success 0, unsuccess 31724, failed 0 00:27:48.453 17:28:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:48.453 17:28:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:51.757 Initializing NVMe Controllers 00:27:51.757 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:51.757 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:51.757 Initialization complete. Launching workers. 
00:27:51.757 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 64752, failed: 0 00:27:51.757 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25730, failed to submit 39022 00:27:51.757 success 0, unsuccess 25730, failed 0 00:27:51.757 17:28:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:51.757 17:28:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:55.039 Initializing NVMe Controllers 00:27:55.039 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:55.039 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:55.039 Initialization complete. Launching workers. 00:27:55.039 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 74837, failed: 0 00:27:55.039 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 18668, failed to submit 56169 00:27:55.039 success 0, unsuccess 18668, failed 0 00:27:55.039 17:28:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:27:55.039 17:28:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:27:55.039 17:28:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:27:55.039 17:28:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:55.039 17:28:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:55.039 17:28:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:55.039 17:28:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:55.039 17:28:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:27:55.039 17:28:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:27:55.039 17:28:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:27:55.298 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:57.207 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:27:57.207 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:27:57.207 00:27:57.207 real 0m12.911s 00:27:57.207 user 0m5.654s 00:27:57.207 sys 0m4.555s 00:27:57.207 ************************************ 00:27:57.207 END TEST kernel_target_abort 00:27:57.207 ************************************ 00:27:57.207 17:28:43 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:57.207 17:28:43 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:57.207 17:28:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:27:57.207 17:28:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:27:57.207 
17:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:57.207 17:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:27:57.207 17:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:57.207 17:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:27:57.207 17:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:57.207 17:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:57.207 rmmod nvme_tcp 00:27:57.207 rmmod nvme_fabrics 00:27:57.207 rmmod nvme_keyring 00:27:57.207 17:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:57.207 17:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:27:57.207 17:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:27:57.207 17:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 98631 ']' 00:27:57.207 17:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 98631 00:27:57.207 17:28:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 98631 ']' 00:27:57.207 17:28:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 98631 00:27:57.207 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (98631) - No such process 00:27:57.207 Process with pid 98631 is not found 00:27:57.207 17:28:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 98631 is not found' 00:27:57.207 17:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:27:57.207 17:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:27:57.469 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:57.727 Waiting for block devices as requested 00:27:57.727 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:27:57.727 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:27:57.727 17:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:57.727 17:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:57.727 17:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:57.727 17:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:57.727 17:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:57.727 17:28:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:57.727 17:28:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:57.985 17:28:43 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:27:57.985 00:27:57.985 real 0m26.990s 00:27:57.985 user 0m51.554s 00:27:57.985 sys 0m7.582s 00:27:57.985 17:28:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:57.985 17:28:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:57.985 ************************************ 00:27:57.985 END TEST nvmf_abort_qd_sizes 00:27:57.985 ************************************ 00:27:57.985 17:28:44 -- spdk/autotest.sh@299 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:27:57.985 17:28:44 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:57.985 17:28:44 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:57.985 17:28:44 -- common/autotest_common.sh@10 -- # set +x 
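As a companion to the kernel_target_abort run above, this is a sketch of the kernel NVMe/TCP export that configure_kernel_target drives through configfs. The xtrace records only the mkdir/ln -s calls and the echo values, not the redirection targets, so the attribute file names below are the standard nvmet configfs ones and reflect my mapping of those echoes rather than a copy of the script; the NQN, block device, and address values are taken from the trace, and the whole thing assumes root on a host with the nvmet modules available.

modprobe nvmet nvmet-tcp                              # nvmet appears in the trace; nvmet-tcp assumed for the TCP port
nqn=nqn.2016-06.io.spdk:testnqn
sub=/sys/kernel/config/nvmet/subsystems/$nqn
port=/sys/kernel/config/nvmet/ports/1
mkdir "$sub" "$sub/namespaces/1" "$port"              # same three directories created in the trace
# the trace also writes a subsystem identity string (echo SPDK-$nqn); its
# attribute file is not visible in the xtrace, so it is omitted here
echo 1 > "$sub/attr_allow_any_host"                   # assumed target of the bare "echo 1"
echo /dev/nvme1n1 > "$sub/namespaces/1/device_path"   # block device selected by the GPT scan above
echo 1 > "$sub/namespaces/1/enable"
echo 10.0.0.1 > "$port/addr_traddr"
echo tcp > "$port/addr_trtype"
echo 4420 > "$port/addr_trsvcid"
echo ipv4 > "$port/addr_adrfam"
ln -s "$sub" "$port/subsystems/"                      # publish the subsystem on the port
nvme discover -t tcp -a 10.0.0.1 -s 4420              # should list two records, as in the trace; add --hostnqn/--hostid if desired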
00:27:57.985 ************************************ 00:27:57.985 START TEST keyring_file 00:27:57.985 ************************************ 00:27:57.985 17:28:44 keyring_file -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:27:57.985 * Looking for test storage... 00:27:57.985 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:27:57.985 17:28:44 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:27:57.985 17:28:44 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:57.985 17:28:44 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:27:57.985 17:28:44 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:57.985 17:28:44 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:57.985 17:28:44 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:57.985 17:28:44 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:57.985 17:28:44 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:57.985 17:28:44 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:57.985 17:28:44 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:57.985 17:28:44 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:57.985 17:28:44 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:57.985 17:28:44 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:57.985 17:28:44 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:27:57.985 17:28:44 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:27:57.985 17:28:44 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:57.985 17:28:44 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:57.985 17:28:44 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:57.985 17:28:44 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:57.986 17:28:44 keyring_file -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:57.986 17:28:44 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:57.986 17:28:44 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:57.986 17:28:44 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:57.986 17:28:44 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:57.986 17:28:44 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:57.986 17:28:44 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:57.986 17:28:44 keyring_file -- paths/export.sh@5 -- # export PATH 00:27:57.986 17:28:44 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:57.986 17:28:44 keyring_file -- nvmf/common.sh@47 -- # : 0 00:27:57.986 17:28:44 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:57.986 17:28:44 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:57.986 17:28:44 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:57.986 17:28:44 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:57.986 17:28:44 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:57.986 17:28:44 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:57.986 17:28:44 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:57.986 17:28:44 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:57.986 17:28:44 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:27:57.986 17:28:44 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:27:57.986 17:28:44 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:27:57.986 17:28:44 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:27:57.986 17:28:44 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:27:57.986 17:28:44 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:27:57.986 17:28:44 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:27:57.986 17:28:44 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:27:57.986 17:28:44 keyring_file -- keyring/common.sh@17 -- # name=key0 00:27:57.986 17:28:44 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:27:57.986 17:28:44 keyring_file -- keyring/common.sh@17 -- # digest=0 00:27:57.986 17:28:44 keyring_file -- keyring/common.sh@18 -- # mktemp 00:27:57.986 17:28:44 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.6TjsaYu9ha 00:27:57.986 17:28:44 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:27:57.986 17:28:44 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:27:57.986 17:28:44 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:27:57.986 17:28:44 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:57.986 17:28:44 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:27:57.986 17:28:44 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:27:57.986 17:28:44 keyring_file -- nvmf/common.sh@705 -- # python - 00:27:57.986 17:28:44 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.6TjsaYu9ha 00:27:57.986 17:28:44 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.6TjsaYu9ha 00:27:57.986 17:28:44 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.6TjsaYu9ha 00:27:57.986 17:28:44 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:27:57.986 17:28:44 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:27:57.986 17:28:44 keyring_file -- keyring/common.sh@17 -- # name=key1 00:27:57.986 17:28:44 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:27:57.986 17:28:44 keyring_file -- keyring/common.sh@17 -- # digest=0 00:27:57.986 17:28:44 keyring_file -- keyring/common.sh@18 -- # mktemp 00:27:57.986 17:28:44 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.UPA8pA4OPe 00:27:57.986 17:28:44 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:27:57.986 17:28:44 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:27:57.986 17:28:44 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:27:57.986 17:28:44 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:57.986 17:28:44 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:27:57.986 17:28:44 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:27:57.986 17:28:44 keyring_file -- nvmf/common.sh@705 -- # python - 00:27:58.244 17:28:44 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.UPA8pA4OPe 00:27:58.244 17:28:44 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.UPA8pA4OPe 00:27:58.244 17:28:44 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.UPA8pA4OPe 00:27:58.244 17:28:44 keyring_file -- keyring/file.sh@30 -- # tgtpid=99512 00:27:58.244 17:28:44 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:58.244 17:28:44 keyring_file -- keyring/file.sh@32 -- # waitforlisten 99512 00:27:58.244 17:28:44 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 99512 ']' 00:27:58.245 17:28:44 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:58.245 17:28:44 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:58.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:58.245 17:28:44 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:58.245 17:28:44 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:58.245 17:28:44 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:58.245 [2024-07-24 17:28:44.314741] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
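prep_key, traced above at keyring/common.sh@15-23, converts each raw hex string into an NVMe TLS PSK interchange string with an inline python snippet, writes it to a mktemp file and locks the file down to mode 0600. The snippet itself is not echoed into the log, so the following is only a plausible reconstruction; it assumes the interchange layout NVMeTLSkey-1:<hash id>:<base64 of key bytes plus CRC-32>: with digest 0 meaning no hash, and it assumes the hex string is used as opaque ASCII bytes rather than being hex-decoded:

  key=00112233445566778899aabbccddeeff
  keyfile=$(mktemp)              # plays the role of /tmp/tmp.6TjsaYu9ha above
  # assumption: a CRC-32 of the key is appended little-endian before base64 encoding
  python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); crc=zlib.crc32(k).to_bytes(4,"little"); print("NVMeTLSkey-1:00:" + base64.b64encode(k+crc).decode() + ":", end="")' "$key" > "$keyfile"
  chmod 0600 "$keyfile"          # keyring_file_add_key later rejects 0660 and requires 0600 (file.sh@80-85)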
00:27:58.245 [2024-07-24 17:28:44.314860] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99512 ] 00:27:58.245 [2024-07-24 17:28:44.450675] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:58.503 [2024-07-24 17:28:44.564165] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:59.096 17:28:45 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:59.096 17:28:45 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:27:59.096 17:28:45 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:27:59.096 17:28:45 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.096 17:28:45 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:59.096 [2024-07-24 17:28:45.291907] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:59.096 null0 00:27:59.096 [2024-07-24 17:28:45.323886] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:59.097 [2024-07-24 17:28:45.324083] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:59.355 [2024-07-24 17:28:45.331880] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:27:59.355 17:28:45 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.355 17:28:45 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:27:59.355 17:28:45 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:27:59.355 17:28:45 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:27:59.355 17:28:45 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:59.355 17:28:45 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:59.355 17:28:45 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:59.355 17:28:45 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:59.355 17:28:45 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:27:59.355 17:28:45 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.355 17:28:45 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:59.355 [2024-07-24 17:28:45.343872] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:27:59.355 2024/07/24 17:28:45 error on JSON-RPC call, method: nvmf_subsystem_add_listener, params: map[listen_address:map[traddr:127.0.0.1 trsvcid:4420 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode0 secure_channel:%!s(bool=false)], err: error received for nvmf_subsystem_add_listener method, err: Code=-32602 Msg=Invalid parameters 00:27:59.355 request: 00:27:59.355 { 00:27:59.355 "method": "nvmf_subsystem_add_listener", 00:27:59.355 "params": { 00:27:59.356 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:27:59.356 "secure_channel": false, 00:27:59.356 "listen_address": { 00:27:59.356 "trtype": "tcp", 00:27:59.356 "traddr": "127.0.0.1", 00:27:59.356 "trsvcid": "4420" 00:27:59.356 } 00:27:59.356 } 00:27:59.356 } 00:27:59.356 Got JSON-RPC error 
response 00:27:59.356 GoRPCClient: error on JSON-RPC call 00:27:59.356 17:28:45 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:59.356 17:28:45 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:27:59.356 17:28:45 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:59.356 17:28:45 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:59.356 17:28:45 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:59.356 17:28:45 keyring_file -- keyring/file.sh@46 -- # bperfpid=99547 00:27:59.356 17:28:45 keyring_file -- keyring/file.sh@48 -- # waitforlisten 99547 /var/tmp/bperf.sock 00:27:59.356 17:28:45 keyring_file -- keyring/file.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:27:59.356 17:28:45 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 99547 ']' 00:27:59.356 17:28:45 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:59.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:59.356 17:28:45 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:59.356 17:28:45 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:59.356 17:28:45 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:59.356 17:28:45 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:59.356 [2024-07-24 17:28:45.398324] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 00:27:59.356 [2024-07-24 17:28:45.398388] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99547 ] 00:27:59.356 [2024-07-24 17:28:45.531435] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:59.614 [2024-07-24 17:28:45.616649] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:00.181 17:28:46 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:00.181 17:28:46 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:28:00.181 17:28:46 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.6TjsaYu9ha 00:28:00.181 17:28:46 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.6TjsaYu9ha 00:28:00.440 17:28:46 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.UPA8pA4OPe 00:28:00.440 17:28:46 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.UPA8pA4OPe 00:28:00.702 17:28:46 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:28:00.702 17:28:46 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:28:00.702 17:28:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:00.702 17:28:46 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:00.702 17:28:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:00.960 17:28:47 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.6TjsaYu9ha == 
\/\t\m\p\/\t\m\p\.\6\T\j\s\a\Y\u\9\h\a ]] 00:28:00.960 17:28:47 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:28:00.960 17:28:47 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:28:00.960 17:28:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:00.960 17:28:47 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:00.960 17:28:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:01.218 17:28:47 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.UPA8pA4OPe == \/\t\m\p\/\t\m\p\.\U\P\A\8\p\A\4\O\P\e ]] 00:28:01.218 17:28:47 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:28:01.218 17:28:47 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:01.218 17:28:47 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:01.218 17:28:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:01.218 17:28:47 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:01.219 17:28:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:01.477 17:28:47 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:28:01.477 17:28:47 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:28:01.477 17:28:47 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:01.477 17:28:47 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:01.477 17:28:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:01.477 17:28:47 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:01.477 17:28:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:01.735 17:28:47 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:28:01.735 17:28:47 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:01.735 17:28:47 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:01.992 [2024-07-24 17:28:48.072336] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:01.992 nvme0n1 00:28:01.992 17:28:48 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:28:01.992 17:28:48 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:01.992 17:28:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:01.992 17:28:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:01.992 17:28:48 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:01.992 17:28:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:02.250 17:28:48 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:28:02.250 17:28:48 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:28:02.250 17:28:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:02.250 17:28:48 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:02.250 17:28:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd 
keyring_get_keys 00:28:02.250 17:28:48 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:02.250 17:28:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:02.508 17:28:48 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:28:02.508 17:28:48 keyring_file -- keyring/file.sh@62 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:02.766 Running I/O for 1 seconds... 00:28:03.700 00:28:03.700 Latency(us) 00:28:03.700 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:03.700 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:28:03.700 nvme0n1 : 1.01 12846.69 50.18 0.00 0.00 9926.37 5987.61 21328.99 00:28:03.700 =================================================================================================================== 00:28:03.700 Total : 12846.69 50.18 0.00 0.00 9926.37 5987.61 21328.99 00:28:03.700 0 00:28:03.700 17:28:49 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:28:03.700 17:28:49 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:28:03.959 17:28:50 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:28:03.959 17:28:50 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:03.959 17:28:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:03.959 17:28:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:03.959 17:28:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:03.959 17:28:50 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:04.217 17:28:50 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:28:04.217 17:28:50 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:28:04.217 17:28:50 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:04.217 17:28:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:04.217 17:28:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:04.218 17:28:50 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:04.218 17:28:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:04.476 17:28:50 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:28:04.476 17:28:50 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:28:04.476 17:28:50 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:28:04.476 17:28:50 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:28:04.476 17:28:50 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:28:04.476 17:28:50 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:04.476 17:28:50 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:28:04.476 17:28:50 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
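The happy path above (file.sh@57-64) attaches an NVMe-oF TCP controller through the bdevperf RPC socket with --psk key0, watches key0's refcnt go from 1 to 2 while nvme0 holds it, drives one second of 4k randrw I/O via bdevperf.py perform_tests, and detaches. Replayed by hand against the same socket, the sequence is roughly:

  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }   # same role as bperf_cmd
  rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
  rpc keyring_get_keys | jq '.[] | select(.name == "key0").refcnt'     # 2 while nvme0 references the key
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
  rpc bdev_nvme_detach_controller nvme0

The NOT wrapper around the key1 attach (file.sh@69), which continues just below, expects the same call with --psk key1 to fail; it does, with a Code=-5 Input/output error once the TCP connection is torn down.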
00:28:04.476 17:28:50 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:28:04.476 17:28:50 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:28:04.735 [2024-07-24 17:28:50.909349] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:28:04.735 [2024-07-24 17:28:50.909941] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x119af30 (107): Transport endpoint is not connected 00:28:04.735 [2024-07-24 17:28:50.910929] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x119af30 (9): Bad file descriptor 00:28:04.735 [2024-07-24 17:28:50.911927] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:04.735 [2024-07-24 17:28:50.911965] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:28:04.735 [2024-07-24 17:28:50.911991] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:04.735 2024/07/24 17:28:50 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:28:04.735 request: 00:28:04.735 { 00:28:04.735 "method": "bdev_nvme_attach_controller", 00:28:04.735 "params": { 00:28:04.735 "name": "nvme0", 00:28:04.735 "trtype": "tcp", 00:28:04.735 "traddr": "127.0.0.1", 00:28:04.735 "adrfam": "ipv4", 00:28:04.735 "trsvcid": "4420", 00:28:04.735 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:04.735 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:04.735 "prchk_reftag": false, 00:28:04.735 "prchk_guard": false, 00:28:04.735 "hdgst": false, 00:28:04.735 "ddgst": false, 00:28:04.735 "psk": "key1" 00:28:04.735 } 00:28:04.735 } 00:28:04.735 Got JSON-RPC error response 00:28:04.735 GoRPCClient: error on JSON-RPC call 00:28:04.735 17:28:50 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:28:04.735 17:28:50 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:04.735 17:28:50 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:04.735 17:28:50 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:04.735 17:28:50 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:28:04.735 17:28:50 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:04.735 17:28:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:04.735 17:28:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:04.735 17:28:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:04.735 17:28:50 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:04.993 17:28:51 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:28:04.993 
17:28:51 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:28:04.993 17:28:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:04.993 17:28:51 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:04.993 17:28:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:04.993 17:28:51 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:04.993 17:28:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:05.251 17:28:51 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:28:05.251 17:28:51 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:28:05.251 17:28:51 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:28:05.509 17:28:51 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:28:05.509 17:28:51 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:28:05.767 17:28:51 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:28:05.767 17:28:51 keyring_file -- keyring/file.sh@77 -- # jq length 00:28:05.767 17:28:51 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:06.025 17:28:52 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:28:06.025 17:28:52 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.6TjsaYu9ha 00:28:06.025 17:28:52 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.6TjsaYu9ha 00:28:06.025 17:28:52 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:28:06.025 17:28:52 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.6TjsaYu9ha 00:28:06.025 17:28:52 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:28:06.025 17:28:52 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:06.025 17:28:52 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:28:06.025 17:28:52 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:06.025 17:28:52 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.6TjsaYu9ha 00:28:06.025 17:28:52 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.6TjsaYu9ha 00:28:06.283 [2024-07-24 17:28:52.446388] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.6TjsaYu9ha': 0100660 00:28:06.283 [2024-07-24 17:28:52.446435] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:28:06.283 2024/07/24 17:28:52 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.6TjsaYu9ha], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:28:06.283 request: 00:28:06.283 { 00:28:06.283 "method": "keyring_file_add_key", 00:28:06.283 "params": { 00:28:06.283 "name": "key0", 00:28:06.283 "path": "/tmp/tmp.6TjsaYu9ha" 00:28:06.283 } 00:28:06.283 } 00:28:06.283 Got JSON-RPC error response 00:28:06.283 GoRPCClient: error on JSON-RPC call 00:28:06.283 17:28:52 keyring_file -- common/autotest_common.sh@653 -- # 
es=1 00:28:06.283 17:28:52 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:06.283 17:28:52 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:06.283 17:28:52 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:06.283 17:28:52 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.6TjsaYu9ha 00:28:06.284 17:28:52 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.6TjsaYu9ha 00:28:06.284 17:28:52 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.6TjsaYu9ha 00:28:06.541 17:28:52 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.6TjsaYu9ha 00:28:06.541 17:28:52 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:28:06.541 17:28:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:06.541 17:28:52 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:06.541 17:28:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:06.541 17:28:52 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:06.541 17:28:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:07.107 17:28:53 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:28:07.107 17:28:53 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:07.107 17:28:53 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:28:07.107 17:28:53 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:07.107 17:28:53 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:28:07.107 17:28:53 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:07.107 17:28:53 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:28:07.107 17:28:53 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:07.107 17:28:53 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:07.107 17:28:53 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:07.107 [2024-07-24 17:28:53.238614] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.6TjsaYu9ha': No such file or directory 00:28:07.107 [2024-07-24 17:28:53.238663] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:28:07.107 [2024-07-24 17:28:53.238703] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:28:07.107 [2024-07-24 17:28:53.238711] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:07.107 [2024-07-24 17:28:53.238720] bdev_nvme.c:6296:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:28:07.107 2024/07/24 
17:28:53 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-19 Msg=No such device 00:28:07.107 request: 00:28:07.107 { 00:28:07.107 "method": "bdev_nvme_attach_controller", 00:28:07.107 "params": { 00:28:07.107 "name": "nvme0", 00:28:07.107 "trtype": "tcp", 00:28:07.107 "traddr": "127.0.0.1", 00:28:07.107 "adrfam": "ipv4", 00:28:07.107 "trsvcid": "4420", 00:28:07.107 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:07.107 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:07.107 "prchk_reftag": false, 00:28:07.107 "prchk_guard": false, 00:28:07.107 "hdgst": false, 00:28:07.107 "ddgst": false, 00:28:07.107 "psk": "key0" 00:28:07.107 } 00:28:07.107 } 00:28:07.107 Got JSON-RPC error response 00:28:07.107 GoRPCClient: error on JSON-RPC call 00:28:07.107 17:28:53 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:28:07.108 17:28:53 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:07.108 17:28:53 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:07.108 17:28:53 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:07.108 17:28:53 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:28:07.108 17:28:53 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:28:07.366 17:28:53 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:28:07.366 17:28:53 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:28:07.366 17:28:53 keyring_file -- keyring/common.sh@17 -- # name=key0 00:28:07.366 17:28:53 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:28:07.366 17:28:53 keyring_file -- keyring/common.sh@17 -- # digest=0 00:28:07.366 17:28:53 keyring_file -- keyring/common.sh@18 -- # mktemp 00:28:07.366 17:28:53 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.hYe7vSY54n 00:28:07.366 17:28:53 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:28:07.366 17:28:53 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:28:07.366 17:28:53 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:28:07.366 17:28:53 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:28:07.366 17:28:53 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:28:07.366 17:28:53 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:28:07.366 17:28:53 keyring_file -- nvmf/common.sh@705 -- # python - 00:28:07.366 17:28:53 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.hYe7vSY54n 00:28:07.366 17:28:53 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.hYe7vSY54n 00:28:07.366 17:28:53 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.hYe7vSY54n 00:28:07.366 17:28:53 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.hYe7vSY54n 00:28:07.366 17:28:53 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.hYe7vSY54n 00:28:07.933 17:28:53 keyring_file -- 
keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:07.933 17:28:53 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:07.933 nvme0n1 00:28:07.933 17:28:54 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:28:08.191 17:28:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:08.191 17:28:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:08.191 17:28:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:08.191 17:28:54 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:08.191 17:28:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:08.191 17:28:54 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:28:08.191 17:28:54 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:28:08.191 17:28:54 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:28:08.449 17:28:54 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:28:08.449 17:28:54 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:28:08.449 17:28:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:08.449 17:28:54 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:08.449 17:28:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:08.708 17:28:54 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:28:08.708 17:28:54 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:28:08.967 17:28:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:08.967 17:28:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:08.967 17:28:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:08.967 17:28:54 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:08.967 17:28:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:08.967 17:28:55 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:28:08.967 17:28:55 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:28:08.967 17:28:55 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:28:09.226 17:28:55 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:28:09.226 17:28:55 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:09.226 17:28:55 keyring_file -- keyring/file.sh@104 -- # jq length 00:28:09.793 17:28:55 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:28:09.793 17:28:55 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.hYe7vSY54n 00:28:09.793 17:28:55 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.hYe7vSY54n 
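file.sh@99-104 above exercise removal of a key that a live controller still references: keyring_file_remove_key succeeds right away, the entry is then reported with removed set to true, its refcnt stays at 1 because nvme0 still pins it, and the keyring only becomes empty once the controller is detached. The same checks by hand, with the socket path and names from the log:

  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }
  rpc keyring_file_remove_key key0
  rpc keyring_get_keys | jq '.[] | select(.name == "key0").removed'    # true (file.sh@101)
  rpc keyring_get_keys | jq '.[] | select(.name == "key0").refcnt'     # 1, the attached controller's reference (file.sh@102)
  rpc bdev_nvme_detach_controller nvme0                                # drop that last reference
  rpc keyring_get_keys | jq length                                     # 0 (file.sh@104)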
00:28:09.793 17:28:55 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.UPA8pA4OPe 00:28:09.793 17:28:55 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.UPA8pA4OPe 00:28:10.055 17:28:56 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:10.055 17:28:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:10.313 nvme0n1 00:28:10.313 17:28:56 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:28:10.313 17:28:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:28:10.881 17:28:56 keyring_file -- keyring/file.sh@112 -- # config='{ 00:28:10.882 "subsystems": [ 00:28:10.882 { 00:28:10.882 "subsystem": "keyring", 00:28:10.882 "config": [ 00:28:10.882 { 00:28:10.882 "method": "keyring_file_add_key", 00:28:10.882 "params": { 00:28:10.882 "name": "key0", 00:28:10.882 "path": "/tmp/tmp.hYe7vSY54n" 00:28:10.882 } 00:28:10.882 }, 00:28:10.882 { 00:28:10.882 "method": "keyring_file_add_key", 00:28:10.882 "params": { 00:28:10.882 "name": "key1", 00:28:10.882 "path": "/tmp/tmp.UPA8pA4OPe" 00:28:10.882 } 00:28:10.882 } 00:28:10.882 ] 00:28:10.882 }, 00:28:10.882 { 00:28:10.882 "subsystem": "iobuf", 00:28:10.882 "config": [ 00:28:10.882 { 00:28:10.882 "method": "iobuf_set_options", 00:28:10.882 "params": { 00:28:10.882 "large_bufsize": 135168, 00:28:10.882 "large_pool_count": 1024, 00:28:10.882 "small_bufsize": 8192, 00:28:10.882 "small_pool_count": 8192 00:28:10.882 } 00:28:10.882 } 00:28:10.882 ] 00:28:10.882 }, 00:28:10.882 { 00:28:10.882 "subsystem": "sock", 00:28:10.882 "config": [ 00:28:10.882 { 00:28:10.882 "method": "sock_set_default_impl", 00:28:10.882 "params": { 00:28:10.882 "impl_name": "posix" 00:28:10.882 } 00:28:10.882 }, 00:28:10.882 { 00:28:10.882 "method": "sock_impl_set_options", 00:28:10.882 "params": { 00:28:10.882 "enable_ktls": false, 00:28:10.882 "enable_placement_id": 0, 00:28:10.882 "enable_quickack": false, 00:28:10.882 "enable_recv_pipe": true, 00:28:10.882 "enable_zerocopy_send_client": false, 00:28:10.882 "enable_zerocopy_send_server": true, 00:28:10.882 "impl_name": "ssl", 00:28:10.882 "recv_buf_size": 4096, 00:28:10.882 "send_buf_size": 4096, 00:28:10.882 "tls_version": 0, 00:28:10.882 "zerocopy_threshold": 0 00:28:10.882 } 00:28:10.882 }, 00:28:10.882 { 00:28:10.882 "method": "sock_impl_set_options", 00:28:10.882 "params": { 00:28:10.882 "enable_ktls": false, 00:28:10.882 "enable_placement_id": 0, 00:28:10.882 "enable_quickack": false, 00:28:10.882 "enable_recv_pipe": true, 00:28:10.882 "enable_zerocopy_send_client": false, 00:28:10.882 "enable_zerocopy_send_server": true, 00:28:10.882 "impl_name": "posix", 00:28:10.882 "recv_buf_size": 2097152, 00:28:10.882 "send_buf_size": 2097152, 00:28:10.882 "tls_version": 0, 00:28:10.882 "zerocopy_threshold": 0 00:28:10.882 } 00:28:10.882 } 00:28:10.882 ] 00:28:10.882 }, 00:28:10.882 { 00:28:10.882 "subsystem": "vmd", 00:28:10.882 "config": [] 00:28:10.882 }, 00:28:10.882 { 00:28:10.882 "subsystem": "accel", 00:28:10.882 "config": [ 00:28:10.882 { 00:28:10.882 "method": 
"accel_set_options", 00:28:10.882 "params": { 00:28:10.882 "buf_count": 2048, 00:28:10.882 "large_cache_size": 16, 00:28:10.882 "sequence_count": 2048, 00:28:10.882 "small_cache_size": 128, 00:28:10.882 "task_count": 2048 00:28:10.882 } 00:28:10.882 } 00:28:10.882 ] 00:28:10.882 }, 00:28:10.882 { 00:28:10.882 "subsystem": "bdev", 00:28:10.882 "config": [ 00:28:10.882 { 00:28:10.882 "method": "bdev_set_options", 00:28:10.882 "params": { 00:28:10.882 "bdev_auto_examine": true, 00:28:10.882 "bdev_io_cache_size": 256, 00:28:10.882 "bdev_io_pool_size": 65535, 00:28:10.882 "iobuf_large_cache_size": 16, 00:28:10.882 "iobuf_small_cache_size": 128 00:28:10.882 } 00:28:10.882 }, 00:28:10.882 { 00:28:10.882 "method": "bdev_raid_set_options", 00:28:10.882 "params": { 00:28:10.882 "process_max_bandwidth_mb_sec": 0, 00:28:10.882 "process_window_size_kb": 1024 00:28:10.882 } 00:28:10.882 }, 00:28:10.882 { 00:28:10.882 "method": "bdev_iscsi_set_options", 00:28:10.882 "params": { 00:28:10.882 "timeout_sec": 30 00:28:10.882 } 00:28:10.882 }, 00:28:10.882 { 00:28:10.882 "method": "bdev_nvme_set_options", 00:28:10.882 "params": { 00:28:10.882 "action_on_timeout": "none", 00:28:10.882 "allow_accel_sequence": false, 00:28:10.882 "arbitration_burst": 0, 00:28:10.882 "bdev_retry_count": 3, 00:28:10.882 "ctrlr_loss_timeout_sec": 0, 00:28:10.882 "delay_cmd_submit": true, 00:28:10.882 "dhchap_dhgroups": [ 00:28:10.882 "null", 00:28:10.882 "ffdhe2048", 00:28:10.882 "ffdhe3072", 00:28:10.882 "ffdhe4096", 00:28:10.882 "ffdhe6144", 00:28:10.882 "ffdhe8192" 00:28:10.882 ], 00:28:10.882 "dhchap_digests": [ 00:28:10.882 "sha256", 00:28:10.882 "sha384", 00:28:10.882 "sha512" 00:28:10.882 ], 00:28:10.882 "disable_auto_failback": false, 00:28:10.882 "fast_io_fail_timeout_sec": 0, 00:28:10.882 "generate_uuids": false, 00:28:10.882 "high_priority_weight": 0, 00:28:10.882 "io_path_stat": false, 00:28:10.882 "io_queue_requests": 512, 00:28:10.882 "keep_alive_timeout_ms": 10000, 00:28:10.882 "low_priority_weight": 0, 00:28:10.882 "medium_priority_weight": 0, 00:28:10.882 "nvme_adminq_poll_period_us": 10000, 00:28:10.882 "nvme_error_stat": false, 00:28:10.882 "nvme_ioq_poll_period_us": 0, 00:28:10.882 "rdma_cm_event_timeout_ms": 0, 00:28:10.882 "rdma_max_cq_size": 0, 00:28:10.882 "rdma_srq_size": 0, 00:28:10.882 "reconnect_delay_sec": 0, 00:28:10.882 "timeout_admin_us": 0, 00:28:10.882 "timeout_us": 0, 00:28:10.882 "transport_ack_timeout": 0, 00:28:10.882 "transport_retry_count": 4, 00:28:10.882 "transport_tos": 0 00:28:10.882 } 00:28:10.882 }, 00:28:10.882 { 00:28:10.882 "method": "bdev_nvme_attach_controller", 00:28:10.882 "params": { 00:28:10.882 "adrfam": "IPv4", 00:28:10.882 "ctrlr_loss_timeout_sec": 0, 00:28:10.882 "ddgst": false, 00:28:10.882 "fast_io_fail_timeout_sec": 0, 00:28:10.882 "hdgst": false, 00:28:10.882 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:10.882 "name": "nvme0", 00:28:10.882 "prchk_guard": false, 00:28:10.882 "prchk_reftag": false, 00:28:10.882 "psk": "key0", 00:28:10.882 "reconnect_delay_sec": 0, 00:28:10.882 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:10.882 "traddr": "127.0.0.1", 00:28:10.882 "trsvcid": "4420", 00:28:10.882 "trtype": "TCP" 00:28:10.882 } 00:28:10.882 }, 00:28:10.882 { 00:28:10.882 "method": "bdev_nvme_set_hotplug", 00:28:10.882 "params": { 00:28:10.882 "enable": false, 00:28:10.882 "period_us": 100000 00:28:10.882 } 00:28:10.882 }, 00:28:10.882 { 00:28:10.882 "method": "bdev_wait_for_examine" 00:28:10.882 } 00:28:10.882 ] 00:28:10.882 }, 00:28:10.882 { 00:28:10.882 "subsystem": 
"nbd", 00:28:10.882 "config": [] 00:28:10.882 } 00:28:10.882 ] 00:28:10.882 }' 00:28:10.882 17:28:56 keyring_file -- keyring/file.sh@114 -- # killprocess 99547 00:28:10.882 17:28:56 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 99547 ']' 00:28:10.882 17:28:56 keyring_file -- common/autotest_common.sh@954 -- # kill -0 99547 00:28:10.882 17:28:56 keyring_file -- common/autotest_common.sh@955 -- # uname 00:28:10.882 17:28:56 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:10.882 17:28:56 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99547 00:28:10.882 killing process with pid 99547 00:28:10.882 Received shutdown signal, test time was about 1.000000 seconds 00:28:10.882 00:28:10.882 Latency(us) 00:28:10.882 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:10.882 =================================================================================================================== 00:28:10.882 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:10.882 17:28:56 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:10.882 17:28:56 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:10.882 17:28:56 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99547' 00:28:10.882 17:28:56 keyring_file -- common/autotest_common.sh@969 -- # kill 99547 00:28:10.882 17:28:56 keyring_file -- common/autotest_common.sh@974 -- # wait 99547 00:28:10.883 17:28:57 keyring_file -- keyring/file.sh@117 -- # bperfpid=100013 00:28:10.883 17:28:57 keyring_file -- keyring/file.sh@119 -- # waitforlisten 100013 /var/tmp/bperf.sock 00:28:10.883 17:28:57 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 100013 ']' 00:28:10.883 17:28:57 keyring_file -- keyring/file.sh@115 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:28:10.883 17:28:57 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:10.883 17:28:57 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:28:10.883 "subsystems": [ 00:28:10.883 { 00:28:10.883 "subsystem": "keyring", 00:28:10.883 "config": [ 00:28:10.883 { 00:28:10.883 "method": "keyring_file_add_key", 00:28:10.883 "params": { 00:28:10.883 "name": "key0", 00:28:10.883 "path": "/tmp/tmp.hYe7vSY54n" 00:28:10.883 } 00:28:10.883 }, 00:28:10.883 { 00:28:10.883 "method": "keyring_file_add_key", 00:28:10.883 "params": { 00:28:10.883 "name": "key1", 00:28:10.883 "path": "/tmp/tmp.UPA8pA4OPe" 00:28:10.883 } 00:28:10.883 } 00:28:10.883 ] 00:28:10.883 }, 00:28:10.883 { 00:28:10.883 "subsystem": "iobuf", 00:28:10.883 "config": [ 00:28:10.883 { 00:28:10.883 "method": "iobuf_set_options", 00:28:10.883 "params": { 00:28:10.883 "large_bufsize": 135168, 00:28:10.883 "large_pool_count": 1024, 00:28:10.883 "small_bufsize": 8192, 00:28:10.883 "small_pool_count": 8192 00:28:10.883 } 00:28:10.883 } 00:28:10.883 ] 00:28:10.883 }, 00:28:10.883 { 00:28:10.883 "subsystem": "sock", 00:28:10.883 "config": [ 00:28:10.883 { 00:28:10.883 "method": "sock_set_default_impl", 00:28:10.883 "params": { 00:28:10.883 "impl_name": "posix" 00:28:10.883 } 00:28:10.883 }, 00:28:10.883 { 00:28:10.883 "method": "sock_impl_set_options", 00:28:10.883 "params": { 00:28:10.883 "enable_ktls": false, 00:28:10.883 "enable_placement_id": 0, 00:28:10.883 "enable_quickack": false, 00:28:10.883 "enable_recv_pipe": true, 00:28:10.883 
"enable_zerocopy_send_client": false, 00:28:10.883 "enable_zerocopy_send_server": true, 00:28:10.883 "impl_name": "ssl", 00:28:10.883 "recv_buf_size": 4096, 00:28:10.883 "send_buf_size": 4096, 00:28:10.883 "tls_version": 0, 00:28:10.883 "zerocopy_threshold": 0 00:28:10.883 } 00:28:10.883 }, 00:28:10.883 { 00:28:10.883 "method": "sock_impl_set_options", 00:28:10.883 "params": { 00:28:10.883 "enable_ktls": false, 00:28:10.883 "enable_placement_id": 0, 00:28:10.883 "enable_quickack": false, 00:28:10.883 "enable_recv_pipe": true, 00:28:10.883 "enable_zerocopy_send_client": false, 00:28:10.883 "enable_zerocopy_send_server": true, 00:28:10.883 "impl_name": "posix", 00:28:10.883 "recv_buf_size": 2097152, 00:28:10.883 "send_buf_size": 2097152, 00:28:10.883 "tls_version": 0, 00:28:10.883 "zerocopy_threshold": 0 00:28:10.883 } 00:28:10.883 } 00:28:10.883 ] 00:28:10.883 }, 00:28:10.883 { 00:28:10.883 "subsystem": "vmd", 00:28:10.883 "config": [] 00:28:10.883 }, 00:28:10.883 { 00:28:10.883 "subsystem": "accel", 00:28:10.883 "config": [ 00:28:10.883 { 00:28:10.883 "method": "accel_set_options", 00:28:10.883 "params": { 00:28:10.883 "buf_count": 2048, 00:28:10.883 "large_cache_size": 16, 00:28:10.883 "sequence_count": 2048, 00:28:10.883 "small_cache_size": 128, 00:28:10.883 "task_count": 2048 00:28:10.883 } 00:28:10.883 } 00:28:10.883 ] 00:28:10.883 }, 00:28:10.883 { 00:28:10.883 "subsystem": "bdev", 00:28:10.883 "config": [ 00:28:10.883 { 00:28:10.883 "method": "bdev_set_options", 00:28:10.883 "params": { 00:28:10.883 "bdev_auto_examine": true, 00:28:10.883 "bdev_io_cache_size": 256, 00:28:10.883 "bdev_io_pool_size": 65535, 00:28:10.883 "iobuf_large_cache_size": 16, 00:28:10.883 "iobuf_small_cache_size": 128 00:28:10.883 } 00:28:10.883 }, 00:28:10.883 { 00:28:10.883 "method": "bdev_raid_set_options", 00:28:10.883 "params": { 00:28:10.883 "process_max_bandwidth_mb_sec": 0, 00:28:10.883 "process_window_size_kb": 1024 00:28:10.883 } 00:28:10.883 }, 00:28:10.883 { 00:28:10.883 "method": "bdev_iscsi_set_options", 00:28:10.883 "params": { 00:28:10.883 "timeout_sec": 30 00:28:10.883 } 00:28:10.883 }, 00:28:10.883 { 00:28:10.883 "method": "bdev_nvme_set_options", 00:28:10.883 "params": { 00:28:10.883 "action_on_timeout": "none", 00:28:10.883 "allow_accel_sequence": false, 00:28:10.883 "arbitration_burst": 0, 00:28:10.883 "bdev_retry_count": 3, 00:28:10.883 "ctrlr_loss_timeout_sec": 0, 00:28:10.883 "delay_cmd_submit": true, 00:28:10.883 "dhchap_dhgroups": [ 00:28:10.883 "null", 00:28:10.883 "ffdhe2048", 00:28:10.883 "ffdhe3072", 00:28:10.883 "ffdhe4096", 00:28:10.883 "ffdhe6144", 00:28:10.883 "ffdhe8192" 00:28:10.883 ], 00:28:10.883 "dhchap_digests": [ 00:28:10.883 "sha256", 00:28:10.883 "sha384", 00:28:10.883 "sha512" 00:28:10.883 ], 00:28:10.883 "disable_auto_failback": false, 00:28:10.883 "fast_io_fail_timeout_sec": 0, 00:28:10.883 "generate_uuids": false, 00:28:10.883 "high_priority_weight": 0, 00:28:10.883 "io_path_stat": false, 00:28:10.883 "io_queue_requests": 512, 00:28:10.883 "keep_alive_timeout_ms": 10000, 00:28:10.883 "low_priority_weight": 0, 00:28:10.883 "medium_priority_weight": 0, 00:28:10.883 "nvme_adminq_poll_period_us": 10000, 00:28:10.883 "nvme_error_stat": false, 00:28:10.883 "nvme_ioq_poll_period_us": 0, 00:28:10.883 "rdma_cm_event_timeout_ms": 0, 00:28:10.883 "rdma_max_cq_size": 0, 00:28:10.883 "rdma_srq_size": 0, 00:28:10.883 "reconnect_delay_sec": 0, 00:28:10.883 "timeout_admin_us": 0, 00:28:10.883 "timeout_us": 0, 00:28:10.883 "transport_ack_timeout": 0, 00:28:10.883 
"transport_retry_count": 4, 00:28:10.883 "transport_tos": 0 00:28:10.883 } 00:28:10.883 }, 00:28:10.883 { 00:28:10.883 "method": "bdev_nvme_attach_controller", 00:28:10.883 "params": { 00:28:10.883 "adrfam": "IPv4", 00:28:10.883 "ctrlr_loss_timeout_sec": 0, 00:28:10.883 "ddgst": false, 00:28:10.883 "fast_io_fail_timeout_sec": 0, 00:28:10.883 "hdgst": false, 00:28:10.883 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:10.883 "name": "nvme0", 00:28:10.883 "prchk_guard": false, 00:28:10.883 "prchk_reftag": false, 00:28:10.883 "psk": "key0", 00:28:10.883 "reconnect_delay_sec": 0, 00:28:10.883 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:10.883 "traddr": "127.0.0.1", 00:28:10.883 "trsvcid": "4420", 00:28:10.883 "trtype": "TCP" 00:28:10.883 } 00:28:10.883 }, 00:28:10.883 { 00:28:10.883 "method": "bdev_nvme_set_hotplug", 00:28:10.883 "params": { 00:28:10.883 "enable": false, 00:28:10.883 "period_us": 100000 00:28:10.883 } 00:28:10.883 }, 00:28:10.883 { 00:28:10.883 "method": "bdev_wait_for_examine" 00:28:10.883 } 00:28:10.883 ] 00:28:10.883 }, 00:28:10.883 { 00:28:10.883 "subsystem": "nbd", 00:28:10.883 "config": [] 00:28:10.883 } 00:28:10.883 ] 00:28:10.883 }' 00:28:10.883 17:28:57 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:10.883 17:28:57 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:10.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:10.883 17:28:57 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:10.883 17:28:57 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:10.883 [2024-07-24 17:28:57.113089] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
00:28:10.883 [2024-07-24 17:28:57.113199] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100013 ] 00:28:11.142 [2024-07-24 17:28:57.247827] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:11.142 [2024-07-24 17:28:57.343382] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:11.401 [2024-07-24 17:28:57.523371] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:11.968 17:28:58 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:11.968 17:28:58 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:28:11.968 17:28:58 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:28:11.968 17:28:58 keyring_file -- keyring/file.sh@120 -- # jq length 00:28:11.968 17:28:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:12.228 17:28:58 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:28:12.228 17:28:58 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:28:12.228 17:28:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:12.228 17:28:58 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:12.228 17:28:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:12.228 17:28:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:12.228 17:28:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:12.485 17:28:58 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:28:12.485 17:28:58 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:28:12.485 17:28:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:12.485 17:28:58 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:12.485 17:28:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:12.485 17:28:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:12.485 17:28:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:12.743 17:28:58 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:28:12.743 17:28:58 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:28:12.743 17:28:58 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:28:12.743 17:28:58 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:28:13.002 17:28:59 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:28:13.002 17:28:59 keyring_file -- keyring/file.sh@1 -- # cleanup 00:28:13.002 17:28:59 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.hYe7vSY54n /tmp/tmp.UPA8pA4OPe 00:28:13.002 17:28:59 keyring_file -- keyring/file.sh@20 -- # killprocess 100013 00:28:13.002 17:28:59 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 100013 ']' 00:28:13.002 17:28:59 keyring_file -- common/autotest_common.sh@954 -- # kill -0 100013 00:28:13.002 17:28:59 keyring_file -- common/autotest_common.sh@955 -- # uname 00:28:13.002 17:28:59 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
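file.sh@112-123 above close the loop on persistence: save_config snapshots the first bdevperf instance (the JSON dumped above, whose keyring section carries the key0/key1 paths and whose bdev section carries the attach with psk key0), the old instance is killed, a second bdevperf (pid 100013) is started with that JSON fed in as -c /dev/fd/63, and the restarted instance is then shown to have both keys and the nvme0 controller back without any further RPCs. A sketch of the same round trip, assuming the binaries from this checkout:

  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }
  config=$(rpc save_config)      # keyring, iobuf, sock, vmd, accel, bdev and nbd subsystems, as shown above
  # stop the old bdevperf here (killprocess 99547 in the log), then restart it from the snapshot;
  # <(echo ...) is the /dev/fd/63 that appears on the traced command line
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 \
      -m 2 -r /var/tmp/bperf.sock -z -c <(echo "$config") &
  # once it is listening again:
  rpc keyring_get_keys | jq length                   # 2, key0 and key1 restored (file.sh@120)
  rpc bdev_nvme_get_controllers | jq -r '.[].name'   # nvme0 (file.sh@123)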
00:28:13.002 17:28:59 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100013 00:28:13.002 killing process with pid 100013 00:28:13.002 Received shutdown signal, test time was about 1.000000 seconds 00:28:13.002 00:28:13.002 Latency(us) 00:28:13.002 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:13.002 =================================================================================================================== 00:28:13.002 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:28:13.002 17:28:59 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:13.002 17:28:59 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:13.002 17:28:59 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100013' 00:28:13.002 17:28:59 keyring_file -- common/autotest_common.sh@969 -- # kill 100013 00:28:13.002 17:28:59 keyring_file -- common/autotest_common.sh@974 -- # wait 100013 00:28:13.261 17:28:59 keyring_file -- keyring/file.sh@21 -- # killprocess 99512 00:28:13.261 17:28:59 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 99512 ']' 00:28:13.261 17:28:59 keyring_file -- common/autotest_common.sh@954 -- # kill -0 99512 00:28:13.261 17:28:59 keyring_file -- common/autotest_common.sh@955 -- # uname 00:28:13.261 17:28:59 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:13.261 17:28:59 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99512 00:28:13.261 killing process with pid 99512 00:28:13.261 17:28:59 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:13.261 17:28:59 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:13.261 17:28:59 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99512' 00:28:13.261 17:28:59 keyring_file -- common/autotest_common.sh@969 -- # kill 99512 00:28:13.261 [2024-07-24 17:28:59.368987] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:28:13.261 17:28:59 keyring_file -- common/autotest_common.sh@974 -- # wait 99512 00:28:13.828 00:28:13.828 real 0m15.734s 00:28:13.828 user 0m39.036s 00:28:13.828 sys 0m3.250s 00:28:13.828 17:28:59 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:13.828 ************************************ 00:28:13.828 END TEST keyring_file 00:28:13.828 ************************************ 00:28:13.828 17:28:59 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:13.828 17:28:59 -- spdk/autotest.sh@300 -- # [[ y == y ]] 00:28:13.828 17:28:59 -- spdk/autotest.sh@301 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:28:13.828 17:28:59 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:13.828 17:28:59 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:13.828 17:28:59 -- common/autotest_common.sh@10 -- # set +x 00:28:13.828 ************************************ 00:28:13.828 START TEST keyring_linux 00:28:13.828 ************************************ 00:28:13.828 17:28:59 keyring_linux -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:28:13.828 * Looking for test storage... 
00:28:13.828 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:28:13.828 17:28:59 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:28:13.828 17:28:59 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:13.828 17:28:59 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:28:13.828 17:28:59 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:13.828 17:28:59 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:13.828 17:28:59 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:13.828 17:28:59 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:13.828 17:28:59 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:13.828 17:28:59 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:13.828 17:28:59 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:13.828 17:28:59 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:13.828 17:28:59 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:13.828 17:28:59 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:13.828 17:28:59 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:28:13.828 17:28:59 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=6e739a91-f4c2-49d2-9c71-a65eb7d5315b 00:28:13.828 17:28:59 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:13.828 17:28:59 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:13.828 17:28:59 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:13.828 17:28:59 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:13.828 17:28:59 keyring_linux -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:13.828 17:28:59 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:13.828 17:28:59 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:13.828 17:28:59 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:13.828 17:28:59 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.828 17:28:59 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.828 17:28:59 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.828 17:28:59 keyring_linux -- paths/export.sh@5 -- # export PATH 00:28:13.828 17:28:59 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.828 17:28:59 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:28:13.828 17:28:59 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:13.828 17:28:59 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:13.828 17:28:59 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:13.828 17:28:59 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:13.828 17:28:59 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:13.828 17:28:59 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:13.828 17:28:59 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:13.828 17:28:59 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:13.828 17:28:59 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:28:13.828 17:28:59 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:28:13.828 17:28:59 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:28:13.828 17:28:59 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:28:13.828 17:28:59 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:28:13.828 17:28:59 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:28:13.828 17:28:59 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:28:13.828 17:28:59 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:28:13.829 17:28:59 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:28:13.829 17:28:59 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:28:13.829 17:28:59 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:28:13.829 17:28:59 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:28:13.829 17:28:59 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:28:13.829 17:28:59 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:28:13.829 17:28:59 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:28:13.829 17:28:59 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:28:13.829 17:28:59 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:28:13.829 17:28:59 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:28:13.829 17:28:59 keyring_linux -- nvmf/common.sh@705 -- # python - 00:28:13.829 17:28:59 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:28:13.829 /tmp/:spdk-test:key0 00:28:13.829 17:28:59 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:28:13.829 17:28:59 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:28:13.829 17:28:59 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:28:13.829 17:28:59 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:28:13.829 17:28:59 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:28:13.829 17:28:59 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:28:13.829 17:28:59 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:28:13.829 17:28:59 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:28:13.829 17:28:59 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:28:13.829 17:28:59 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:28:13.829 17:28:59 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:28:13.829 17:28:59 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:28:13.829 17:28:59 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:28:13.829 17:28:59 keyring_linux -- nvmf/common.sh@705 -- # python - 00:28:13.829 17:28:59 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:28:13.829 /tmp/:spdk-test:key1 00:28:13.829 17:29:00 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:28:13.829 17:29:00 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=100167 00:28:13.829 17:29:00 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:13.829 17:29:00 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 100167 00:28:13.829 17:29:00 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 100167 ']' 00:28:13.829 17:29:00 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:13.829 17:29:00 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:13.829 17:29:00 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:13.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:13.829 17:29:00 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:13.829 17:29:00 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:28:14.087 [2024-07-24 17:29:00.070422] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
00:28:14.087 [2024-07-24 17:29:00.070516] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100167 ] 00:28:14.087 [2024-07-24 17:29:00.208632] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:14.087 [2024-07-24 17:29:00.286118] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:15.021 17:29:00 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:15.021 17:29:00 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:28:15.021 17:29:00 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:28:15.021 17:29:00 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.021 17:29:00 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:28:15.021 [2024-07-24 17:29:00.980354] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:15.021 null0 00:28:15.021 [2024-07-24 17:29:01.012328] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:15.021 [2024-07-24 17:29:01.012561] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:15.021 17:29:01 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.021 17:29:01 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:28:15.021 988723878 00:28:15.021 17:29:01 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:28:15.021 351743778 00:28:15.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:15.021 17:29:01 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=100202 00:28:15.021 17:29:01 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:28:15.021 17:29:01 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 100202 /var/tmp/bperf.sock 00:28:15.021 17:29:01 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 100202 ']' 00:28:15.021 17:29:01 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:15.021 17:29:01 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:15.021 17:29:01 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:15.021 17:29:01 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:15.021 17:29:01 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:28:15.021 [2024-07-24 17:29:01.099901] Starting SPDK v24.09-pre git sha1 dca21ec0f / DPDK 24.03.0 initialization... 
00:28:15.021 [2024-07-24 17:29:01.100210] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100202 ] 00:28:15.021 [2024-07-24 17:29:01.236530] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:15.279 [2024-07-24 17:29:01.322397] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:15.845 17:29:01 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:15.845 17:29:01 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:28:15.845 17:29:01 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:28:15.845 17:29:01 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:28:16.103 17:29:02 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:28:16.103 17:29:02 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:16.363 17:29:02 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:28:16.363 17:29:02 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:28:16.624 [2024-07-24 17:29:02.754344] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:16.624 nvme0n1 00:28:16.624 17:29:02 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:28:16.624 17:29:02 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:28:16.624 17:29:02 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:28:16.624 17:29:02 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:28:16.624 17:29:02 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:28:16.624 17:29:02 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:17.191 17:29:03 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:28:17.191 17:29:03 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:28:17.191 17:29:03 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:28:17.191 17:29:03 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:28:17.191 17:29:03 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:17.191 17:29:03 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:17.191 17:29:03 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:28:17.191 17:29:03 keyring_linux -- keyring/linux.sh@25 -- # sn=988723878 00:28:17.191 17:29:03 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:28:17.191 17:29:03 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:28:17.191 17:29:03 keyring_linux -- keyring/linux.sh@26 -- # [[ 988723878 == \9\8\8\7\2\3\8\7\8 ]] 00:28:17.191 17:29:03 keyring_linux -- 
keyring/linux.sh@27 -- # keyctl print 988723878 00:28:17.191 17:29:03 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:28:17.191 17:29:03 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:17.449 Running I/O for 1 seconds... 00:28:18.384 00:28:18.384 Latency(us) 00:28:18.384 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:18.384 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:18.384 nvme0n1 : 1.01 12444.21 48.61 0.00 0.00 10226.36 3023.59 12511.42 00:28:18.384 =================================================================================================================== 00:28:18.384 Total : 12444.21 48.61 0.00 0.00 10226.36 3023.59 12511.42 00:28:18.384 0 00:28:18.384 17:29:04 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:28:18.384 17:29:04 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:28:18.643 17:29:04 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:28:18.643 17:29:04 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:28:18.643 17:29:04 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:28:18.643 17:29:04 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:28:18.643 17:29:04 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:28:18.643 17:29:04 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:18.901 17:29:05 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:28:18.901 17:29:05 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:28:18.901 17:29:05 keyring_linux -- keyring/linux.sh@23 -- # return 00:28:18.901 17:29:05 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:28:18.901 17:29:05 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:28:18.901 17:29:05 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:28:18.901 17:29:05 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:28:18.901 17:29:05 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:18.901 17:29:05 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:28:18.901 17:29:05 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:18.901 17:29:05 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:28:18.901 17:29:05 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:28:19.159 [2024-07-24 17:29:05.325993] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:28:19.159 [2024-07-24 17:29:05.326353] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1254ea0 (107): Transport endpoint is not connected 00:28:19.159 [2024-07-24 17:29:05.327340] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1254ea0 (9): Bad file descriptor 00:28:19.159 [2024-07-24 17:29:05.328336] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:19.159 [2024-07-24 17:29:05.328357] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:28:19.159 [2024-07-24 17:29:05.328366] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:19.159 2024/07/24 17:29:05 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk::spdk-test:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:28:19.159 request: 00:28:19.159 { 00:28:19.159 "method": "bdev_nvme_attach_controller", 00:28:19.159 "params": { 00:28:19.159 "name": "nvme0", 00:28:19.159 "trtype": "tcp", 00:28:19.159 "traddr": "127.0.0.1", 00:28:19.159 "adrfam": "ipv4", 00:28:19.159 "trsvcid": "4420", 00:28:19.160 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:19.160 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:19.160 "prchk_reftag": false, 00:28:19.160 "prchk_guard": false, 00:28:19.160 "hdgst": false, 00:28:19.160 "ddgst": false, 00:28:19.160 "psk": ":spdk-test:key1" 00:28:19.160 } 00:28:19.160 } 00:28:19.160 Got JSON-RPC error response 00:28:19.160 GoRPCClient: error on JSON-RPC call 00:28:19.160 17:29:05 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:28:19.160 17:29:05 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:19.160 17:29:05 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:19.160 17:29:05 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:19.160 17:29:05 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:28:19.160 17:29:05 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:28:19.160 17:29:05 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:28:19.160 17:29:05 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:28:19.160 17:29:05 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:28:19.160 17:29:05 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:28:19.160 17:29:05 keyring_linux -- keyring/linux.sh@33 -- # sn=988723878 00:28:19.160 17:29:05 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 988723878 00:28:19.160 1 links removed 00:28:19.160 17:29:05 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:28:19.160 17:29:05 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:28:19.160 17:29:05 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:28:19.160 17:29:05 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:28:19.160 17:29:05 keyring_linux -- keyring/linux.sh@16 -- # keyctl 
search @s user :spdk-test:key1 00:28:19.160 17:29:05 keyring_linux -- keyring/linux.sh@33 -- # sn=351743778 00:28:19.160 17:29:05 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 351743778 00:28:19.160 1 links removed 00:28:19.160 17:29:05 keyring_linux -- keyring/linux.sh@41 -- # killprocess 100202 00:28:19.160 17:29:05 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 100202 ']' 00:28:19.160 17:29:05 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 100202 00:28:19.160 17:29:05 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:28:19.160 17:29:05 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:19.160 17:29:05 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100202 00:28:19.160 killing process with pid 100202 00:28:19.160 Received shutdown signal, test time was about 1.000000 seconds 00:28:19.160 00:28:19.160 Latency(us) 00:28:19.160 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:19.160 =================================================================================================================== 00:28:19.160 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:19.160 17:29:05 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:19.160 17:29:05 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:19.160 17:29:05 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100202' 00:28:19.160 17:29:05 keyring_linux -- common/autotest_common.sh@969 -- # kill 100202 00:28:19.160 17:29:05 keyring_linux -- common/autotest_common.sh@974 -- # wait 100202 00:28:19.418 17:29:05 keyring_linux -- keyring/linux.sh@42 -- # killprocess 100167 00:28:19.418 17:29:05 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 100167 ']' 00:28:19.418 17:29:05 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 100167 00:28:19.418 17:29:05 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:28:19.418 17:29:05 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:19.418 17:29:05 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 100167 00:28:19.418 killing process with pid 100167 00:28:19.419 17:29:05 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:19.419 17:29:05 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:19.419 17:29:05 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 100167' 00:28:19.419 17:29:05 keyring_linux -- common/autotest_common.sh@969 -- # kill 100167 00:28:19.419 17:29:05 keyring_linux -- common/autotest_common.sh@974 -- # wait 100167 00:28:19.986 00:28:19.986 real 0m6.190s 00:28:19.986 user 0m11.922s 00:28:19.986 sys 0m1.604s 00:28:19.986 ************************************ 00:28:19.986 END TEST keyring_linux 00:28:19.986 ************************************ 00:28:19.986 17:29:05 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:19.986 17:29:05 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:28:19.986 17:29:06 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:28:19.986 17:29:06 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:28:19.986 17:29:06 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:28:19.986 17:29:06 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:28:19.986 17:29:06 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:28:19.986 17:29:06 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 
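The keyring_linux pass above exercises SPDK's Linux kernel-keyring integration end to end: build an NVMe/TCP interchange-format PSK, register it in the session keyring with keyctl, point bdevperf at it by key name, and verify that SPDK and the kernel agree on the key's serial number and payload. The sketch below condenses that flow; it reuses only commands, RPCs and paths visible in the trace. The embedded Python mirrors what the traced format_key/python step appears to do (base64 of the key bytes plus an appended CRC-32, assumed little-endian per the TLS PSK interchange format), since the helper's source is not shown verbatim in this excerpt.

# Condensed sketch of the keyring_linux flow traced above; abort on the first failure.
set -e
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bperf.sock

# 1. Interchange-format PSK. Digest "00" means the configured key bytes are used
#    as-is; the CRC-32 byte order here is an assumption, not visible in the trace.
psk=$(python3 - <<'PYEOF'
import base64, zlib
key = b"00112233445566778899aabbccddeeff"
crc = zlib.crc32(key).to_bytes(4, byteorder="little")
print("NVMeTLSkey-1:00:{}:".format(base64.b64encode(key + crc).decode()))
PYEOF
)

# 2. Register it in the kernel session keyring; keyctl prints the serial number.
keyctl add user :spdk-test:key0 "$psk" @s

# 3. Enable the Linux keyring module in bdevperf (started with --wait-for-rpc),
#    finish init, then attach over TLS, naming the kernel-keyring key.
"$rpc" -s "$sock" keyring_linux_set_options --enable
"$rpc" -s "$sock" framework_start_init
"$rpc" -s "$sock" bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
    --psk :spdk-test:key0

# 4. Cross-check: the serial SPDK reports for the key matches the kernel's view,
#    and keyctl print returns the interchange-format string registered above.
rpc_sn=$("$rpc" -s "$sock" keyring_get_keys | jq -r '.[] | select(.name == ":spdk-test:key0") | .sn')
[[ $rpc_sn == $(keyctl search @s user :spdk-test:key0) ]]
[[ $(keyctl print "$rpc_sn") == "$psk" ]]

# 5. Drive I/O over the TLS connection, then tear down.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests
"$rpc" -s "$sock" bdev_nvme_detach_controller nvme0
keyctl unlink "$rpc_sn"

The negative check in the trace then repeats the attach with --psk :spdk-test:key1, a key that is present in the kernel keyring but presumably does not match the PSK the target was configured with, and expects the call to fail with the Code=-5 Input/output error shown above before both keys are unlinked.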
00:28:19.986 17:29:06 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:28:19.986 17:29:06 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:28:19.986 17:29:06 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:28:19.986 17:29:06 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:28:19.986 17:29:06 -- spdk/autotest.sh@360 -- # '[' 0 -eq 1 ']' 00:28:19.986 17:29:06 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:28:19.986 17:29:06 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:28:19.986 17:29:06 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:28:19.986 17:29:06 -- spdk/autotest.sh@379 -- # [[ 0 -eq 1 ]] 00:28:19.986 17:29:06 -- spdk/autotest.sh@384 -- # trap - SIGINT SIGTERM EXIT 00:28:19.986 17:29:06 -- spdk/autotest.sh@386 -- # timing_enter post_cleanup 00:28:19.986 17:29:06 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:19.986 17:29:06 -- common/autotest_common.sh@10 -- # set +x 00:28:19.986 17:29:06 -- spdk/autotest.sh@387 -- # autotest_cleanup 00:28:19.986 17:29:06 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:28:19.986 17:29:06 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:28:19.986 17:29:06 -- common/autotest_common.sh@10 -- # set +x 00:28:21.363 INFO: APP EXITING 00:28:21.363 INFO: killing all VMs 00:28:21.363 INFO: killing vhost app 00:28:21.363 INFO: EXIT DONE 00:28:22.297 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:22.297 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:28:22.297 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:28:22.916 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:22.916 Cleaning 00:28:22.916 Removing: /var/run/dpdk/spdk0/config 00:28:22.916 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:28:22.916 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:28:22.916 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:28:22.916 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:28:22.916 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:28:22.916 Removing: /var/run/dpdk/spdk0/hugepage_info 00:28:22.916 Removing: /var/run/dpdk/spdk1/config 00:28:22.916 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:28:22.916 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:28:22.916 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:28:22.916 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:28:22.916 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:28:22.916 Removing: /var/run/dpdk/spdk1/hugepage_info 00:28:22.916 Removing: /var/run/dpdk/spdk2/config 00:28:22.916 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:28:22.916 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:28:22.916 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:28:22.916 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:28:22.916 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:28:22.916 Removing: /var/run/dpdk/spdk2/hugepage_info 00:28:22.916 Removing: /var/run/dpdk/spdk3/config 00:28:22.916 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:28:22.916 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:28:22.916 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:28:22.916 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:28:22.916 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:28:22.916 Removing: /var/run/dpdk/spdk3/hugepage_info 00:28:22.916 Removing: /var/run/dpdk/spdk4/config 00:28:22.916 Removing: 
/var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:28:22.916 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:28:22.916 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:28:22.916 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:28:22.916 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:28:22.916 Removing: /var/run/dpdk/spdk4/hugepage_info 00:28:22.916 Removing: /dev/shm/nvmf_trace.0 00:28:22.916 Removing: /dev/shm/spdk_tgt_trace.pid60670 00:28:22.916 Removing: /var/run/dpdk/spdk0 00:28:22.916 Removing: /var/run/dpdk/spdk1 00:28:22.916 Removing: /var/run/dpdk/spdk2 00:28:22.916 Removing: /var/run/dpdk/spdk3 00:28:22.916 Removing: /var/run/dpdk/spdk4 00:28:22.916 Removing: /var/run/dpdk/spdk_pid100013 00:28:22.916 Removing: /var/run/dpdk/spdk_pid100167 00:28:22.916 Removing: /var/run/dpdk/spdk_pid100202 00:28:22.916 Removing: /var/run/dpdk/spdk_pid60525 00:28:22.916 Removing: /var/run/dpdk/spdk_pid60670 00:28:22.916 Removing: /var/run/dpdk/spdk_pid60931 00:28:22.916 Removing: /var/run/dpdk/spdk_pid61029 00:28:23.190 Removing: /var/run/dpdk/spdk_pid61063 00:28:23.190 Removing: /var/run/dpdk/spdk_pid61178 00:28:23.190 Removing: /var/run/dpdk/spdk_pid61208 00:28:23.190 Removing: /var/run/dpdk/spdk_pid61326 00:28:23.190 Removing: /var/run/dpdk/spdk_pid61606 00:28:23.190 Removing: /var/run/dpdk/spdk_pid61776 00:28:23.190 Removing: /var/run/dpdk/spdk_pid61858 00:28:23.190 Removing: /var/run/dpdk/spdk_pid61950 00:28:23.190 Removing: /var/run/dpdk/spdk_pid62040 00:28:23.190 Removing: /var/run/dpdk/spdk_pid62078 00:28:23.190 Removing: /var/run/dpdk/spdk_pid62114 00:28:23.190 Removing: /var/run/dpdk/spdk_pid62175 00:28:23.190 Removing: /var/run/dpdk/spdk_pid62293 00:28:23.190 Removing: /var/run/dpdk/spdk_pid62923 00:28:23.190 Removing: /var/run/dpdk/spdk_pid62987 00:28:23.190 Removing: /var/run/dpdk/spdk_pid63056 00:28:23.190 Removing: /var/run/dpdk/spdk_pid63084 00:28:23.190 Removing: /var/run/dpdk/spdk_pid63163 00:28:23.190 Removing: /var/run/dpdk/spdk_pid63191 00:28:23.190 Removing: /var/run/dpdk/spdk_pid63270 00:28:23.190 Removing: /var/run/dpdk/spdk_pid63298 00:28:23.190 Removing: /var/run/dpdk/spdk_pid63355 00:28:23.190 Removing: /var/run/dpdk/spdk_pid63385 00:28:23.190 Removing: /var/run/dpdk/spdk_pid63431 00:28:23.190 Removing: /var/run/dpdk/spdk_pid63461 00:28:23.190 Removing: /var/run/dpdk/spdk_pid63613 00:28:23.190 Removing: /var/run/dpdk/spdk_pid63643 00:28:23.190 Removing: /var/run/dpdk/spdk_pid63718 00:28:23.190 Removing: /var/run/dpdk/spdk_pid64147 00:28:23.190 Removing: /var/run/dpdk/spdk_pid64497 00:28:23.190 Removing: /var/run/dpdk/spdk_pid66906 00:28:23.190 Removing: /var/run/dpdk/spdk_pid66951 00:28:23.190 Removing: /var/run/dpdk/spdk_pid67266 00:28:23.190 Removing: /var/run/dpdk/spdk_pid67316 00:28:23.190 Removing: /var/run/dpdk/spdk_pid67673 00:28:23.190 Removing: /var/run/dpdk/spdk_pid68211 00:28:23.190 Removing: /var/run/dpdk/spdk_pid68659 00:28:23.191 Removing: /var/run/dpdk/spdk_pid69639 00:28:23.191 Removing: /var/run/dpdk/spdk_pid70620 00:28:23.191 Removing: /var/run/dpdk/spdk_pid70742 00:28:23.191 Removing: /var/run/dpdk/spdk_pid70810 00:28:23.191 Removing: /var/run/dpdk/spdk_pid72279 00:28:23.191 Removing: /var/run/dpdk/spdk_pid72560 00:28:23.191 Removing: /var/run/dpdk/spdk_pid75896 00:28:23.191 Removing: /var/run/dpdk/spdk_pid76278 00:28:23.191 Removing: /var/run/dpdk/spdk_pid76876 00:28:23.191 Removing: /var/run/dpdk/spdk_pid77277 00:28:23.191 Removing: /var/run/dpdk/spdk_pid82594 00:28:23.191 Removing: /var/run/dpdk/spdk_pid83033 00:28:23.191 
Removing: /var/run/dpdk/spdk_pid83143 00:28:23.191 Removing: /var/run/dpdk/spdk_pid83290 00:28:23.191 Removing: /var/run/dpdk/spdk_pid83340 00:28:23.191 Removing: /var/run/dpdk/spdk_pid83381 00:28:23.191 Removing: /var/run/dpdk/spdk_pid83427 00:28:23.191 Removing: /var/run/dpdk/spdk_pid83585 00:28:23.191 Removing: /var/run/dpdk/spdk_pid83738 00:28:23.191 Removing: /var/run/dpdk/spdk_pid84002 00:28:23.191 Removing: /var/run/dpdk/spdk_pid84123 00:28:23.191 Removing: /var/run/dpdk/spdk_pid84378 00:28:23.191 Removing: /var/run/dpdk/spdk_pid84503 00:28:23.191 Removing: /var/run/dpdk/spdk_pid84620 00:28:23.191 Removing: /var/run/dpdk/spdk_pid84957 00:28:23.191 Removing: /var/run/dpdk/spdk_pid85406 00:28:23.191 Removing: /var/run/dpdk/spdk_pid85711 00:28:23.191 Removing: /var/run/dpdk/spdk_pid86201 00:28:23.191 Removing: /var/run/dpdk/spdk_pid86208 00:28:23.191 Removing: /var/run/dpdk/spdk_pid86553 00:28:23.191 Removing: /var/run/dpdk/spdk_pid86567 00:28:23.191 Removing: /var/run/dpdk/spdk_pid86588 00:28:23.191 Removing: /var/run/dpdk/spdk_pid86616 00:28:23.191 Removing: /var/run/dpdk/spdk_pid86631 00:28:23.191 Removing: /var/run/dpdk/spdk_pid86976 00:28:23.191 Removing: /var/run/dpdk/spdk_pid87019 00:28:23.191 Removing: /var/run/dpdk/spdk_pid87355 00:28:23.191 Removing: /var/run/dpdk/spdk_pid87605 00:28:23.191 Removing: /var/run/dpdk/spdk_pid88097 00:28:23.191 Removing: /var/run/dpdk/spdk_pid88683 00:28:23.191 Removing: /var/run/dpdk/spdk_pid90034 00:28:23.191 Removing: /var/run/dpdk/spdk_pid90624 00:28:23.191 Removing: /var/run/dpdk/spdk_pid90633 00:28:23.191 Removing: /var/run/dpdk/spdk_pid92557 00:28:23.191 Removing: /var/run/dpdk/spdk_pid92647 00:28:23.191 Removing: /var/run/dpdk/spdk_pid92732 00:28:23.191 Removing: /var/run/dpdk/spdk_pid92828 00:28:23.191 Removing: /var/run/dpdk/spdk_pid92985 00:28:23.191 Removing: /var/run/dpdk/spdk_pid93070 00:28:23.191 Removing: /var/run/dpdk/spdk_pid93160 00:28:23.191 Removing: /var/run/dpdk/spdk_pid93251 00:28:23.191 Removing: /var/run/dpdk/spdk_pid93595 00:28:23.191 Removing: /var/run/dpdk/spdk_pid94273 00:28:23.191 Removing: /var/run/dpdk/spdk_pid95638 00:28:23.191 Removing: /var/run/dpdk/spdk_pid95838 00:28:23.191 Removing: /var/run/dpdk/spdk_pid96136 00:28:23.191 Removing: /var/run/dpdk/spdk_pid96436 00:28:23.191 Removing: /var/run/dpdk/spdk_pid96985 00:28:23.191 Removing: /var/run/dpdk/spdk_pid96994 00:28:23.191 Removing: /var/run/dpdk/spdk_pid97356 00:28:23.191 Removing: /var/run/dpdk/spdk_pid97514 00:28:23.191 Removing: /var/run/dpdk/spdk_pid97671 00:28:23.191 Removing: /var/run/dpdk/spdk_pid97765 00:28:23.191 Removing: /var/run/dpdk/spdk_pid97923 00:28:23.191 Removing: /var/run/dpdk/spdk_pid98032 00:28:23.449 Removing: /var/run/dpdk/spdk_pid98700 00:28:23.449 Removing: /var/run/dpdk/spdk_pid98734 00:28:23.449 Removing: /var/run/dpdk/spdk_pid98765 00:28:23.449 Removing: /var/run/dpdk/spdk_pid99019 00:28:23.449 Removing: /var/run/dpdk/spdk_pid99053 00:28:23.449 Removing: /var/run/dpdk/spdk_pid99086 00:28:23.449 Removing: /var/run/dpdk/spdk_pid99512 00:28:23.449 Removing: /var/run/dpdk/spdk_pid99547 00:28:23.449 Clean 00:28:23.449 17:29:09 -- common/autotest_common.sh@1451 -- # return 0 00:28:23.449 17:29:09 -- spdk/autotest.sh@388 -- # timing_exit post_cleanup 00:28:23.449 17:29:09 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:23.449 17:29:09 -- common/autotest_common.sh@10 -- # set +x 00:28:23.449 17:29:09 -- spdk/autotest.sh@390 -- # timing_exit autotest 00:28:23.449 17:29:09 -- common/autotest_common.sh@730 -- # xtrace_disable 
00:28:23.449 17:29:09 -- common/autotest_common.sh@10 -- # set +x 00:28:23.449 17:29:09 -- spdk/autotest.sh@391 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:28:23.449 17:29:09 -- spdk/autotest.sh@393 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:28:23.449 17:29:09 -- spdk/autotest.sh@393 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:28:23.449 17:29:09 -- spdk/autotest.sh@395 -- # hash lcov 00:28:23.449 17:29:09 -- spdk/autotest.sh@395 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:28:23.449 17:29:09 -- spdk/autotest.sh@397 -- # hostname 00:28:23.449 17:29:09 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:28:23.707 geninfo: WARNING: invalid characters removed from testname! 00:28:45.623 17:29:31 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:48.903 17:29:35 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:52.184 17:29:37 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:54.092 17:29:40 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:56.625 17:29:42 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:59.158 17:29:45 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:01.691 17:29:47 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:29:01.691 17:29:47 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 
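The coverage post-processing in the trace above is easier to follow without the xtrace noise. A condensed sketch, using only the lcov options, directories and filter patterns visible in the trace (cov_base.info is a baseline captured earlier in the job, outside this excerpt):

# Condensed sketch of the lcov post-processing traced above.
LCOV_OPTS=(--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
           --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1
           --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q)
out=/home/vagrant/spdk_repo/spdk/../output

# Capture the counters accumulated while the tests ran ...
lcov "${LCOV_OPTS[@]}" -c -d /home/vagrant/spdk_repo/spdk -t "$(hostname)" -o "$out/cov_test.info"

# ... merge them with the pre-test baseline ...
lcov "${LCOV_OPTS[@]}" -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"

# ... and strip everything that is not SPDK's own code from the final report.
for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov "${LCOV_OPTS[@]}" -r "$out/cov_total.info" "$pattern" -o "$out/cov_total.info"
done
rm -f "$out/cov_base.info" "$out/cov_test.info"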
00:29:01.691 17:29:47 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:29:01.691 17:29:47 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:01.691 17:29:47 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:01.691 17:29:47 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.691 17:29:47 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.691 17:29:47 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.691 17:29:47 -- paths/export.sh@5 -- $ export PATH 00:29:01.691 17:29:47 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.691 17:29:47 -- common/autobuild_common.sh@446 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:29:01.691 17:29:47 -- common/autobuild_common.sh@447 -- $ date +%s 00:29:01.691 17:29:47 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721842187.XXXXXX 00:29:01.691 17:29:47 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721842187.UOWEGh 00:29:01.691 17:29:47 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:29:01.691 17:29:47 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:29:01.691 17:29:47 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:29:01.691 17:29:47 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:29:01.691 17:29:47 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:29:01.691 17:29:47 -- common/autobuild_common.sh@463 -- $ get_config_params 00:29:01.691 17:29:47 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:29:01.691 17:29:47 -- common/autotest_common.sh@10 -- $ set +x 00:29:01.691 17:29:47 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi 
--with-golang' 00:29:01.691 17:29:47 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:29:01.691 17:29:47 -- pm/common@17 -- $ local monitor 00:29:01.691 17:29:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:01.691 17:29:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:01.691 17:29:47 -- pm/common@25 -- $ sleep 1 00:29:01.691 17:29:47 -- pm/common@21 -- $ date +%s 00:29:01.691 17:29:47 -- pm/common@21 -- $ date +%s 00:29:01.691 17:29:47 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721842187 00:29:01.691 17:29:47 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721842187 00:29:01.691 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721842187_collect-vmstat.pm.log 00:29:01.691 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721842187_collect-cpu-load.pm.log 00:29:02.627 17:29:48 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:29:02.627 17:29:48 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:29:02.627 17:29:48 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:29:02.627 17:29:48 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:29:02.627 17:29:48 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:29:02.627 17:29:48 -- spdk/autopackage.sh@19 -- $ timing_finish 00:29:02.627 17:29:48 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:29:02.627 17:29:48 -- common/autotest_common.sh@737 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:29:02.627 17:29:48 -- common/autotest_common.sh@739 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:29:02.886 17:29:48 -- spdk/autopackage.sh@20 -- $ exit 0 00:29:02.886 17:29:48 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:29:02.886 17:29:48 -- pm/common@29 -- $ signal_monitor_resources TERM 00:29:02.886 17:29:48 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:29:02.886 17:29:48 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:02.886 17:29:48 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:29:02.886 17:29:48 -- pm/common@44 -- $ pid=101881 00:29:02.886 17:29:48 -- pm/common@50 -- $ kill -TERM 101881 00:29:02.886 17:29:48 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:02.886 17:29:48 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:29:02.886 17:29:48 -- pm/common@44 -- $ pid=101883 00:29:02.886 17:29:48 -- pm/common@50 -- $ kill -TERM 101883 00:29:02.886 + [[ -n 5273 ]] 00:29:02.886 + sudo kill 5273 00:29:02.896 [Pipeline] } 00:29:02.916 [Pipeline] // timeout 00:29:02.922 [Pipeline] } 00:29:02.940 [Pipeline] // stage 00:29:02.946 [Pipeline] } 00:29:02.965 [Pipeline] // catchError 00:29:02.975 [Pipeline] stage 00:29:02.978 [Pipeline] { (Stop VM) 00:29:02.993 [Pipeline] sh 00:29:03.272 + vagrant halt 00:29:06.558 ==> default: Halting domain... 00:29:13.125 [Pipeline] sh 00:29:13.405 + vagrant destroy -f 00:29:16.689 ==> default: Removing domain... 
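The resource monitors started for autopackage (collect-cpu-load and collect-vmstat above) are stopped again from an EXIT trap using per-collector pidfiles, as the pm/common trace shows. A minimal sketch of that teardown, reusing only the pidfile paths and signal visible in the trace; reading the pid back with cat is illustrative, since the trace only shows the existence check, the pid variable and the resulting kill -TERM.

# Sketch of the pidfile-based monitor teardown traced above.
power_dir=/home/vagrant/spdk_repo/spdk/../output/power

# Terminate whichever resource collectors are still running, identified by the
# pidfiles they left under the power/ output directory.
for name in collect-cpu-load collect-vmstat; do
    pidfile="$power_dir/$name.pid"
    [[ -e $pidfile ]] && kill -TERM "$(cat "$pidfile")"
done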
00:29:16.960 [Pipeline] sh 00:29:17.240 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest/output 00:29:17.247 [Pipeline] } 00:29:17.263 [Pipeline] // stage 00:29:17.267 [Pipeline] } 00:29:17.281 [Pipeline] // dir 00:29:17.286 [Pipeline] } 00:29:17.300 [Pipeline] // wrap 00:29:17.305 [Pipeline] } 00:29:17.320 [Pipeline] // catchError 00:29:17.329 [Pipeline] stage 00:29:17.330 [Pipeline] { (Epilogue) 00:29:17.342 [Pipeline] sh 00:29:17.627 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:29:24.205 [Pipeline] catchError 00:29:24.207 [Pipeline] { 00:29:24.222 [Pipeline] sh 00:29:24.503 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:29:24.503 Artifacts sizes are good 00:29:24.511 [Pipeline] } 00:29:24.530 [Pipeline] // catchError 00:29:24.541 [Pipeline] archiveArtifacts 00:29:24.547 Archiving artifacts 00:29:24.728 [Pipeline] cleanWs 00:29:24.740 [WS-CLEANUP] Deleting project workspace... 00:29:24.740 [WS-CLEANUP] Deferred wipeout is used... 00:29:24.746 [WS-CLEANUP] done 00:29:24.748 [Pipeline] } 00:29:24.761 [Pipeline] // stage 00:29:24.767 [Pipeline] } 00:29:24.783 [Pipeline] // node 00:29:24.789 [Pipeline] End of Pipeline 00:29:24.825 Finished: SUCCESS